Oct 9 00:48:24.890935 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 9 00:48:24.890955 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Oct 8 23:34:40 -00 2024
Oct 9 00:48:24.890964 kernel: KASLR enabled
Oct 9 00:48:24.890970 kernel: efi: EFI v2.7 by EDK II
Oct 9 00:48:24.890975 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Oct 9 00:48:24.890981 kernel: random: crng init done
Oct 9 00:48:24.890988 kernel: secureboot: Secure boot disabled
Oct 9 00:48:24.890994 kernel: ACPI: Early table checksum verification disabled
Oct 9 00:48:24.891000 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Oct 9 00:48:24.891008 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 9 00:48:24.891017 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:48:24.891024 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:48:24.891030 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:48:24.891036 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:48:24.891102 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:48:24.891111 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:48:24.891118 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:48:24.891124 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:48:24.891130 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:48:24.891137 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 9 00:48:24.891143 kernel: NUMA: Failed to initialise from firmware
Oct 9 00:48:24.891150 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 9 00:48:24.891156 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Oct 9 00:48:24.891163 kernel: Zone ranges:
Oct 9 00:48:24.891169 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 9 00:48:24.891176 kernel: DMA32 empty
Oct 9 00:48:24.891183 kernel: Normal empty
Oct 9 00:48:24.891189 kernel: Movable zone start for each node
Oct 9 00:48:24.891195 kernel: Early memory node ranges
Oct 9 00:48:24.891201 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Oct 9 00:48:24.891208 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Oct 9 00:48:24.891214 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Oct 9 00:48:24.891221 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 9 00:48:24.891227 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 9 00:48:24.891233 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 9 00:48:24.891239 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 9 00:48:24.891246 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 9 00:48:24.891254 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 9 00:48:24.891260 kernel: psci: probing for conduit method from ACPI.
Oct 9 00:48:24.891266 kernel: psci: PSCIv1.1 detected in firmware.
Oct 9 00:48:24.891275 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 9 00:48:24.891282 kernel: psci: Trusted OS migration not required
Oct 9 00:48:24.891289 kernel: psci: SMC Calling Convention v1.1
Oct 9 00:48:24.891297 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 9 00:48:24.891304 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 9 00:48:24.891310 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 9 00:48:24.891317 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 9 00:48:24.891324 kernel: Detected PIPT I-cache on CPU0
Oct 9 00:48:24.891331 kernel: CPU features: detected: GIC system register CPU interface
Oct 9 00:48:24.891338 kernel: CPU features: detected: Hardware dirty bit management
Oct 9 00:48:24.891344 kernel: CPU features: detected: Spectre-v4
Oct 9 00:48:24.891351 kernel: CPU features: detected: Spectre-BHB
Oct 9 00:48:24.891358 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 9 00:48:24.891366 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 9 00:48:24.891373 kernel: CPU features: detected: ARM erratum 1418040
Oct 9 00:48:24.891379 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 9 00:48:24.891386 kernel: alternatives: applying boot alternatives
Oct 9 00:48:24.891394 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d2d67b5440410ae2d0aa86eba97891969be0a7a421fa55f13442706ef7ed2a5e
Oct 9 00:48:24.891401 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 00:48:24.891408 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 9 00:48:24.891415 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 00:48:24.891421 kernel: Fallback order for Node 0: 0
Oct 9 00:48:24.891428 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 9 00:48:24.891435 kernel: Policy zone: DMA
Oct 9 00:48:24.891443 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 00:48:24.891450 kernel: software IO TLB: area num 4.
Oct 9 00:48:24.891456 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Oct 9 00:48:24.891463 kernel: Memory: 2386400K/2572288K available (10240K kernel code, 2184K rwdata, 8092K rodata, 39552K init, 897K bss, 185888K reserved, 0K cma-reserved)
Oct 9 00:48:24.891470 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 9 00:48:24.891477 kernel: trace event string verifier disabled
Oct 9 00:48:24.891484 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 00:48:24.891491 kernel: rcu: RCU event tracing is enabled.
Oct 9 00:48:24.891498 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 9 00:48:24.891505 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 00:48:24.891512 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 00:48:24.891519 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 00:48:24.891527 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 9 00:48:24.891534 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 9 00:48:24.891540 kernel: GICv3: 256 SPIs implemented
Oct 9 00:48:24.891547 kernel: GICv3: 0 Extended SPIs implemented
Oct 9 00:48:24.891554 kernel: Root IRQ handler: gic_handle_irq
Oct 9 00:48:24.891560 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 9 00:48:24.891567 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 9 00:48:24.891574 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 9 00:48:24.891581 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Oct 9 00:48:24.891588 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Oct 9 00:48:24.891595 kernel: GICv3: using LPI property table @0x00000000400f0000
Oct 9 00:48:24.891602 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Oct 9 00:48:24.891609 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 00:48:24.891616 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 9 00:48:24.891623 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 9 00:48:24.891630 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 9 00:48:24.891637 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 9 00:48:24.891644 kernel: arm-pv: using stolen time PV
Oct 9 00:48:24.891651 kernel: Console: colour dummy device 80x25
Oct 9 00:48:24.891658 kernel: ACPI: Core revision 20230628
Oct 9 00:48:24.891665 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 9 00:48:24.891672 kernel: pid_max: default: 32768 minimum: 301
Oct 9 00:48:24.891680 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 9 00:48:24.891687 kernel: landlock: Up and running.
Oct 9 00:48:24.891694 kernel: SELinux: Initializing.
Oct 9 00:48:24.891701 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 00:48:24.891708 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 00:48:24.891715 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 00:48:24.891722 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 00:48:24.891729 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 00:48:24.891736 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 00:48:24.891744 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 9 00:48:24.891751 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 9 00:48:24.891758 kernel: Remapping and enabling EFI services.
Oct 9 00:48:24.891765 kernel: smp: Bringing up secondary CPUs ...
Oct 9 00:48:24.891772 kernel: Detected PIPT I-cache on CPU1
Oct 9 00:48:24.891778 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 9 00:48:24.891786 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Oct 9 00:48:24.891793 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 9 00:48:24.891799 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 9 00:48:24.891816 kernel: Detected PIPT I-cache on CPU2
Oct 9 00:48:24.891824 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 9 00:48:24.891835 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Oct 9 00:48:24.891843 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 9 00:48:24.891851 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 9 00:48:24.891858 kernel: Detected PIPT I-cache on CPU3
Oct 9 00:48:24.891865 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 9 00:48:24.891873 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Oct 9 00:48:24.891880 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 9 00:48:24.891888 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 9 00:48:24.891896 kernel: smp: Brought up 1 node, 4 CPUs
Oct 9 00:48:24.891903 kernel: SMP: Total of 4 processors activated.
Oct 9 00:48:24.891910 kernel: CPU features: detected: 32-bit EL0 Support
Oct 9 00:48:24.891918 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 9 00:48:24.891925 kernel: CPU features: detected: Common not Private translations
Oct 9 00:48:24.891933 kernel: CPU features: detected: CRC32 instructions
Oct 9 00:48:24.891940 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 9 00:48:24.891948 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 9 00:48:24.891956 kernel: CPU features: detected: LSE atomic instructions
Oct 9 00:48:24.891963 kernel: CPU features: detected: Privileged Access Never
Oct 9 00:48:24.891971 kernel: CPU features: detected: RAS Extension Support
Oct 9 00:48:24.891978 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 9 00:48:24.891985 kernel: CPU: All CPU(s) started at EL1
Oct 9 00:48:24.891993 kernel: alternatives: applying system-wide alternatives
Oct 9 00:48:24.892000 kernel: devtmpfs: initialized
Oct 9 00:48:24.892007 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 00:48:24.892016 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 9 00:48:24.892023 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 00:48:24.892031 kernel: SMBIOS 3.0.0 present.
Oct 9 00:48:24.892038 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Oct 9 00:48:24.892059 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 00:48:24.892067 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 9 00:48:24.892075 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 9 00:48:24.892083 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 9 00:48:24.892090 kernel: audit: initializing netlink subsys (disabled)
Oct 9 00:48:24.892099 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Oct 9 00:48:24.892106 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 00:48:24.892114 kernel: cpuidle: using governor menu
Oct 9 00:48:24.892121 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 9 00:48:24.892128 kernel: ASID allocator initialised with 32768 entries
Oct 9 00:48:24.892136 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 00:48:24.892143 kernel: Serial: AMBA PL011 UART driver
Oct 9 00:48:24.892151 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 9 00:48:24.892158 kernel: Modules: 0 pages in range for non-PLT usage
Oct 9 00:48:24.892166 kernel: Modules: 508992 pages in range for PLT usage
Oct 9 00:48:24.892174 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 9 00:48:24.892181 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 9 00:48:24.892189 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 9 00:48:24.892196 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 9 00:48:24.892203 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 00:48:24.892210 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 00:48:24.892218 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 9 00:48:24.892225 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 9 00:48:24.892233 kernel: ACPI: Added _OSI(Module Device)
Oct 9 00:48:24.892241 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 00:48:24.892248 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 00:48:24.892256 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 00:48:24.892263 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 00:48:24.892270 kernel: ACPI: Interpreter enabled
Oct 9 00:48:24.892277 kernel: ACPI: Using GIC for interrupt routing
Oct 9 00:48:24.892284 kernel: ACPI: MCFG table detected, 1 entries
Oct 9 00:48:24.892292 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 9 00:48:24.892299 kernel: printk: console [ttyAMA0] enabled
Oct 9 00:48:24.892308 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 00:48:24.892433 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 00:48:24.892504 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 9 00:48:24.892568 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 9 00:48:24.892628 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 9 00:48:24.892690 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 9 00:48:24.892699 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 9 00:48:24.892709 kernel: PCI host bridge to bus 0000:00
Oct 9 00:48:24.892775 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 9 00:48:24.892845 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 9 00:48:24.892903 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 9 00:48:24.892958 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 00:48:24.893034 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 9 00:48:24.893165 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 9 00:48:24.893232 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 9 00:48:24.893296 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 9 00:48:24.893358 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 9 00:48:24.893421 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 9 00:48:24.893486 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 9 00:48:24.893554 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 9 00:48:24.893617 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 9 00:48:24.893682 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 9 00:48:24.893738 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 9 00:48:24.893748 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 9 00:48:24.893756 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 9 00:48:24.893763 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 9 00:48:24.893771 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 9 00:48:24.893778 kernel: iommu: Default domain type: Translated
Oct 9 00:48:24.893788 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 9 00:48:24.893795 kernel: efivars: Registered efivars operations
Oct 9 00:48:24.893809 kernel: vgaarb: loaded
Oct 9 00:48:24.893817 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 9 00:48:24.893824 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 00:48:24.893832 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 00:48:24.893839 kernel: pnp: PnP ACPI init
Oct 9 00:48:24.893910 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 9 00:48:24.893923 kernel: pnp: PnP ACPI: found 1 devices
Oct 9 00:48:24.893930 kernel: NET: Registered PF_INET protocol family
Oct 9 00:48:24.893938 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 9 00:48:24.893945 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 9 00:48:24.893953 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 00:48:24.893960 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 00:48:24.893968 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 9 00:48:24.893975 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 9 00:48:24.893983 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 00:48:24.893992 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 00:48:24.893999 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 00:48:24.894006 kernel: PCI: CLS 0 bytes, default 64
Oct 9 00:48:24.894014 kernel: kvm [1]: HYP mode not available
Oct 9 00:48:24.894021 kernel: Initialise system trusted keyrings
Oct 9 00:48:24.894028 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 9 00:48:24.894036 kernel: Key type asymmetric registered
Oct 9 00:48:24.894075 kernel: Asymmetric key parser 'x509' registered
Oct 9 00:48:24.894084 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 9 00:48:24.894094 kernel: io scheduler mq-deadline registered
Oct 9 00:48:24.894101 kernel: io scheduler kyber registered
Oct 9 00:48:24.894108 kernel: io scheduler bfq registered
Oct 9 00:48:24.894116 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 9 00:48:24.894123 kernel: ACPI: button: Power Button [PWRB]
Oct 9 00:48:24.894131 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 9 00:48:24.894208 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 9 00:48:24.894219 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 00:48:24.894227 kernel: thunder_xcv, ver 1.0
Oct 9 00:48:24.894234 kernel: thunder_bgx, ver 1.0
Oct 9 00:48:24.894243 kernel: nicpf, ver 1.0
Oct 9 00:48:24.894251 kernel: nicvf, ver 1.0
Oct 9 00:48:24.894323 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 9 00:48:24.894385 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-09T00:48:24 UTC (1728434904)
Oct 9 00:48:24.894394 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 9 00:48:24.894402 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 9 00:48:24.894410 kernel: watchdog: Delayed init of the lockup detector failed: -19
Oct 9 00:48:24.894419 kernel: watchdog: Hard watchdog permanently disabled
Oct 9 00:48:24.894426 kernel: NET: Registered PF_INET6 protocol family
Oct 9 00:48:24.894434 kernel: Segment Routing with IPv6
Oct 9 00:48:24.894441 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 00:48:24.894448 kernel: NET: Registered PF_PACKET protocol family
Oct 9 00:48:24.894456 kernel: Key type dns_resolver registered
Oct 9 00:48:24.894463 kernel: registered taskstats version 1
Oct 9 00:48:24.894471 kernel: Loading compiled-in X.509 certificates
Oct 9 00:48:24.894478 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 80611b0a9480eaf6d787b908c6349fdb5d07fa81'
Oct 9 00:48:24.894486 kernel: Key type .fscrypt registered
Oct 9 00:48:24.894494 kernel: Key type fscrypt-provisioning registered
Oct 9 00:48:24.894502 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 00:48:24.894509 kernel: ima: Allocated hash algorithm: sha1
Oct 9 00:48:24.894516 kernel: ima: No architecture policies found
Oct 9 00:48:24.894524 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 9 00:48:24.894531 kernel: clk: Disabling unused clocks
Oct 9 00:48:24.894539 kernel: Freeing unused kernel memory: 39552K
Oct 9 00:48:24.894546 kernel: Run /init as init process
Oct 9 00:48:24.894555 kernel: with arguments:
Oct 9 00:48:24.894562 kernel: /init
Oct 9 00:48:24.894569 kernel: with environment:
Oct 9 00:48:24.894576 kernel: HOME=/
Oct 9 00:48:24.894583 kernel: TERM=linux
Oct 9 00:48:24.894591 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 00:48:24.894599 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 00:48:24.894609 systemd[1]: Detected virtualization kvm.
Oct 9 00:48:24.894617 systemd[1]: Detected architecture arm64.
Oct 9 00:48:24.894625 systemd[1]: Running in initrd.
Oct 9 00:48:24.894632 systemd[1]: No hostname configured, using default hostname.
Oct 9 00:48:24.894640 systemd[1]: Hostname set to .
Oct 9 00:48:24.894648 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 00:48:24.894655 systemd[1]: Queued start job for default target initrd.target.
Oct 9 00:48:24.894663 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 00:48:24.894670 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 00:48:24.894680 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 00:48:24.894687 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 00:48:24.894695 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 00:48:24.894703 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 00:48:24.894712 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 00:48:24.894720 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 00:48:24.894728 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 00:48:24.894737 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 00:48:24.894745 systemd[1]: Reached target paths.target - Path Units.
Oct 9 00:48:24.894752 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 00:48:24.894780 systemd[1]: Reached target swap.target - Swaps.
Oct 9 00:48:24.894788 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 00:48:24.894796 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 00:48:24.894810 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 00:48:24.894818 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 00:48:24.894825 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 00:48:24.894843 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 00:48:24.894852 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 00:48:24.894860 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 00:48:24.894868 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 00:48:24.894876 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 9 00:48:24.894883 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 00:48:24.894891 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 00:48:24.894914 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 00:48:24.894924 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 00:48:24.894932 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 00:48:24.894939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 00:48:24.894947 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 00:48:24.894955 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 00:48:24.894963 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 00:48:24.894973 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 00:48:24.894981 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:48:24.895007 systemd-journald[237]: Collecting audit messages is disabled.
Oct 9 00:48:24.895028 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 00:48:24.895036 systemd-journald[237]: Journal started
Oct 9 00:48:24.895073 systemd-journald[237]: Runtime Journal (/run/log/journal/f07ee52e0c614a17980c5c5bbcafe0aa) is 5.9M, max 47.3M, 41.4M free.
Oct 9 00:48:24.901174 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 00:48:24.901202 kernel: Bridge firewalling registered
Oct 9 00:48:24.886576 systemd-modules-load[238]: Inserted module 'overlay'
Oct 9 00:48:24.900458 systemd-modules-load[238]: Inserted module 'br_netfilter'
Oct 9 00:48:24.904087 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 00:48:24.906679 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 00:48:24.906705 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 00:48:24.908910 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 00:48:24.912372 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 00:48:24.914180 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 00:48:24.916211 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 00:48:24.921452 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 00:48:24.922544 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 00:48:24.925121 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 9 00:48:24.930216 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 00:48:24.932380 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 00:48:24.938035 dracut-cmdline[275]: dracut-dracut-053
Oct 9 00:48:24.941492 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d2d67b5440410ae2d0aa86eba97891969be0a7a421fa55f13442706ef7ed2a5e
Oct 9 00:48:24.959585 systemd-resolved[280]: Positive Trust Anchors:
Oct 9 00:48:24.959659 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 00:48:24.959689 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 00:48:24.964616 systemd-resolved[280]: Defaulting to hostname 'linux'.
Oct 9 00:48:24.965605 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 00:48:24.967243 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 00:48:25.009075 kernel: SCSI subsystem initialized
Oct 9 00:48:25.013063 kernel: Loading iSCSI transport class v2.0-870.
Oct 9 00:48:25.022071 kernel: iscsi: registered transport (tcp)
Oct 9 00:48:25.033069 kernel: iscsi: registered transport (qla4xxx)
Oct 9 00:48:25.033101 kernel: QLogic iSCSI HBA Driver
Oct 9 00:48:25.074063 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 9 00:48:25.085227 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 9 00:48:25.102231 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 9 00:48:25.102280 kernel: device-mapper: uevent: version 1.0.3
Oct 9 00:48:25.102292 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 9 00:48:25.149072 kernel: raid6: neonx8 gen() 15773 MB/s
Oct 9 00:48:25.166055 kernel: raid6: neonx4 gen() 15648 MB/s
Oct 9 00:48:25.183060 kernel: raid6: neonx2 gen() 13218 MB/s
Oct 9 00:48:25.200061 kernel: raid6: neonx1 gen() 10505 MB/s
Oct 9 00:48:25.217060 kernel: raid6: int64x8 gen() 6958 MB/s
Oct 9 00:48:25.234053 kernel: raid6: int64x4 gen() 7354 MB/s
Oct 9 00:48:25.251056 kernel: raid6: int64x2 gen() 6130 MB/s
Oct 9 00:48:25.268058 kernel: raid6: int64x1 gen() 5053 MB/s
Oct 9 00:48:25.268072 kernel: raid6: using algorithm neonx8 gen() 15773 MB/s
Oct 9 00:48:25.285079 kernel: raid6: .... xor() 11937 MB/s, rmw enabled
Oct 9 00:48:25.285105 kernel: raid6: using neon recovery algorithm
Oct 9 00:48:25.291192 kernel: xor: measuring software checksum speed
Oct 9 00:48:25.292316 kernel: 8regs : 1674 MB/sec
Oct 9 00:48:25.292328 kernel: 32regs : 19340 MB/sec
Oct 9 00:48:25.293243 kernel: arm64_neon : 26892 MB/sec
Oct 9 00:48:25.293259 kernel: xor: using function: arm64_neon (26892 MB/sec)
Oct 9 00:48:25.344067 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 9 00:48:25.355111 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 00:48:25.366250 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 00:48:25.377244 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Oct 9 00:48:25.380340 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 00:48:25.382550 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 9 00:48:25.397036 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Oct 9 00:48:25.421965 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 00:48:25.433200 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 00:48:25.470162 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 00:48:25.479200 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 9 00:48:25.491565 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 00:48:25.494291 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 00:48:25.495189 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 00:48:25.496952 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 00:48:25.508162 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 00:48:25.516392 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 00:48:25.519156 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 9 00:48:25.519318 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 9 00:48:25.522179 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 00:48:25.522218 kernel: GPT:9289727 != 19775487 Oct 9 00:48:25.522247 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 00:48:25.522264 kernel: GPT:9289727 != 19775487 Oct 9 00:48:25.523404 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 9 00:48:25.523425 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 00:48:25.524505 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 00:48:25.524621 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 00:48:25.527441 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 00:48:25.528344 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 9 00:48:25.528467 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:48:25.530185 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:48:25.540064 kernel: BTRFS: device fsid c25b3a2f-539f-42a7-8842-97b35e474647 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (524) Oct 9 00:48:25.542065 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (512) Oct 9 00:48:25.542268 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:48:25.552076 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:48:25.562375 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 9 00:48:25.566403 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 9 00:48:25.569857 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 9 00:48:25.570781 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 9 00:48:25.575677 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 00:48:25.587168 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 00:48:25.588624 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 00:48:25.592741 disk-uuid[552]: Primary Header is updated. Oct 9 00:48:25.592741 disk-uuid[552]: Secondary Entries is updated. Oct 9 00:48:25.592741 disk-uuid[552]: Secondary Header is updated. Oct 9 00:48:25.599065 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 00:48:25.607947 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 9 00:48:26.606057 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 00:48:26.606624 disk-uuid[553]: The operation has completed successfully. Oct 9 00:48:26.627852 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 00:48:26.627945 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 00:48:26.651195 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 9 00:48:26.655599 sh[574]: Success Oct 9 00:48:26.666840 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 9 00:48:26.692716 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 00:48:26.710414 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 00:48:26.712448 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 9 00:48:26.722070 kernel: BTRFS info (device dm-0): first mount of filesystem c25b3a2f-539f-42a7-8842-97b35e474647 Oct 9 00:48:26.722112 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 9 00:48:26.722122 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 00:48:26.722132 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 00:48:26.723051 kernel: BTRFS info (device dm-0): using free space tree Oct 9 00:48:26.726588 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 00:48:26.727397 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 00:48:26.741202 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 9 00:48:26.742510 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Oct 9 00:48:26.749661 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 00:48:26.749698 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 9 00:48:26.749708 kernel: BTRFS info (device vda6): using free space tree Oct 9 00:48:26.752343 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 00:48:26.758751 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 00:48:26.760059 kernel: BTRFS info (device vda6): last unmount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 00:48:26.766121 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 9 00:48:26.772211 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 9 00:48:26.833922 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 00:48:26.844193 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 00:48:26.874890 ignition[665]: Ignition 2.19.0 Oct 9 00:48:26.874901 ignition[665]: Stage: fetch-offline Oct 9 00:48:26.874934 ignition[665]: no configs at "/usr/lib/ignition/base.d" Oct 9 00:48:26.874942 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:48:26.875187 ignition[665]: parsed url from cmdline: "" Oct 9 00:48:26.875190 ignition[665]: no config URL provided Oct 9 00:48:26.875195 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 00:48:26.875203 ignition[665]: no config at "/usr/lib/ignition/user.ign" Oct 9 00:48:26.875230 ignition[665]: op(1): [started] loading QEMU firmware config module Oct 9 00:48:26.879936 systemd-networkd[765]: lo: Link UP Oct 9 00:48:26.875235 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 9 00:48:26.879940 systemd-networkd[765]: lo: Gained carrier Oct 9 00:48:26.880761 systemd-networkd[765]: Enumeration completed Oct 9 00:48:26.880987 systemd[1]: Started systemd-networkd.service - Network 
Configuration. Oct 9 00:48:26.881293 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 00:48:26.881296 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 00:48:26.882133 systemd-networkd[765]: eth0: Link UP Oct 9 00:48:26.882136 systemd-networkd[765]: eth0: Gained carrier Oct 9 00:48:26.882142 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 00:48:26.882688 systemd[1]: Reached target network.target - Network. Oct 9 00:48:26.891786 ignition[665]: op(1): [finished] loading QEMU firmware config module Oct 9 00:48:26.903077 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 00:48:26.931434 ignition[665]: parsing config with SHA512: adaebd3e7ef1cc315fdb4f578dc465efa272f8662253b194b5f7a517ad2ad3ef9a86bc47bd8582849793cc806d04cf243ea40590c400ef05b928efad77922637 Oct 9 00:48:26.937078 unknown[665]: fetched base config from "system" Oct 9 00:48:26.937088 unknown[665]: fetched user config from "qemu" Oct 9 00:48:26.937562 ignition[665]: fetch-offline: fetch-offline passed Oct 9 00:48:26.937625 ignition[665]: Ignition finished successfully Oct 9 00:48:26.939799 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 00:48:26.941208 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 9 00:48:26.953214 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 9 00:48:26.963829 ignition[771]: Ignition 2.19.0 Oct 9 00:48:26.963845 ignition[771]: Stage: kargs Oct 9 00:48:26.964009 ignition[771]: no configs at "/usr/lib/ignition/base.d" Oct 9 00:48:26.964018 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:48:26.964915 ignition[771]: kargs: kargs passed Oct 9 00:48:26.964958 ignition[771]: Ignition finished successfully Oct 9 00:48:26.968741 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 9 00:48:26.979199 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 9 00:48:26.988267 ignition[781]: Ignition 2.19.0 Oct 9 00:48:26.988276 ignition[781]: Stage: disks Oct 9 00:48:26.988428 ignition[781]: no configs at "/usr/lib/ignition/base.d" Oct 9 00:48:26.988442 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:48:26.989334 ignition[781]: disks: disks passed Oct 9 00:48:26.990657 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 9 00:48:26.989375 ignition[781]: Ignition finished successfully Oct 9 00:48:26.991659 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 9 00:48:26.992460 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 00:48:26.993289 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 00:48:26.994330 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 00:48:26.995596 systemd[1]: Reached target basic.target - Basic System. Oct 9 00:48:26.997532 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 9 00:48:27.010295 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 9 00:48:27.014084 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 9 00:48:27.015752 systemd[1]: Mounting sysroot.mount - /sysroot... 
Oct 9 00:48:27.059059 kernel: EXT4-fs (vda9): mounted filesystem 3a4adf89-ce2b-46a9-8e1a-433a27a27d16 r/w with ordered data mode. Quota mode: none. Oct 9 00:48:27.059725 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 9 00:48:27.060815 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 9 00:48:27.076116 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 00:48:27.077580 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 9 00:48:27.078724 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 9 00:48:27.078759 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 9 00:48:27.085349 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800) Oct 9 00:48:27.085369 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 00:48:27.085380 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 9 00:48:27.085395 kernel: BTRFS info (device vda6): using free space tree Oct 9 00:48:27.078789 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 00:48:27.088029 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 00:48:27.083949 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 9 00:48:27.087273 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 9 00:48:27.090114 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 9 00:48:27.133854 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Oct 9 00:48:27.136717 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Oct 9 00:48:27.139699 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Oct 9 00:48:27.143109 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Oct 9 00:48:27.210095 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 9 00:48:27.226145 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 9 00:48:27.227526 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 9 00:48:27.232052 kernel: BTRFS info (device vda6): last unmount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 00:48:27.245143 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 9 00:48:27.248648 ignition[912]: INFO : Ignition 2.19.0 Oct 9 00:48:27.248648 ignition[912]: INFO : Stage: mount Oct 9 00:48:27.249828 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 00:48:27.249828 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:48:27.249828 ignition[912]: INFO : mount: mount passed Oct 9 00:48:27.249828 ignition[912]: INFO : Ignition finished successfully Oct 9 00:48:27.250923 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 9 00:48:27.261183 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 9 00:48:27.720504 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 9 00:48:27.730198 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 9 00:48:27.735796 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928) Oct 9 00:48:27.735823 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 00:48:27.735833 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 9 00:48:27.737064 kernel: BTRFS info (device vda6): using free space tree Oct 9 00:48:27.739065 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 00:48:27.739909 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 9 00:48:27.761050 ignition[945]: INFO : Ignition 2.19.0 Oct 9 00:48:27.761050 ignition[945]: INFO : Stage: files Oct 9 00:48:27.762273 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 00:48:27.762273 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:48:27.762273 ignition[945]: DEBUG : files: compiled without relabeling support, skipping Oct 9 00:48:27.764846 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 9 00:48:27.764846 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 9 00:48:27.764846 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 9 00:48:27.764846 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 9 00:48:27.764846 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 9 00:48:27.764667 unknown[945]: wrote ssh authorized keys file for user: core Oct 9 00:48:27.770657 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 9 00:48:27.770657 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 9 00:48:27.770657 ignition[945]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 9 00:48:27.770657 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Oct 9 00:48:27.815824 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 9 00:48:27.919633 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 9 00:48:27.919633 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 9 00:48:27.922548 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 9 00:48:27.922548 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 9 00:48:27.922548 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 9 00:48:27.922548 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 00:48:27.922548 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 00:48:27.922548 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 00:48:27.922548 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 00:48:27.922548 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 00:48:27.922548 ignition[945]: INFO : files: createFilesystemsFiles: 
createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 00:48:27.922548 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 9 00:48:27.922548 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 9 00:48:27.922548 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 9 00:48:27.922548 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Oct 9 00:48:28.178537 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 9 00:48:28.385242 systemd-networkd[765]: eth0: Gained IPv6LL Oct 9 00:48:28.561204 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 9 00:48:28.561204 ignition[945]: INFO : files: op(c): [started] processing unit "containerd.service" Oct 9 00:48:28.563724 ignition[945]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 9 00:48:28.563724 ignition[945]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 9 00:48:28.563724 ignition[945]: INFO : files: op(c): [finished] processing unit "containerd.service" Oct 9 00:48:28.563724 ignition[945]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Oct 9 
00:48:28.563724 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 00:48:28.563724 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 00:48:28.563724 ignition[945]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Oct 9 00:48:28.563724 ignition[945]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Oct 9 00:48:28.563724 ignition[945]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 9 00:48:28.563724 ignition[945]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 9 00:48:28.563724 ignition[945]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Oct 9 00:48:28.563724 ignition[945]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Oct 9 00:48:28.584927 ignition[945]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 9 00:48:28.588204 ignition[945]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 9 00:48:28.589410 ignition[945]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Oct 9 00:48:28.589410 ignition[945]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Oct 9 00:48:28.589410 ignition[945]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Oct 9 00:48:28.589410 ignition[945]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 9 00:48:28.589410 ignition[945]: INFO : files: createResultFile: 
createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 9 00:48:28.589410 ignition[945]: INFO : files: files passed Oct 9 00:48:28.589410 ignition[945]: INFO : Ignition finished successfully Oct 9 00:48:28.592586 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 9 00:48:28.599164 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 9 00:48:28.600466 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 9 00:48:28.603218 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 9 00:48:28.603930 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 9 00:48:28.607672 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory Oct 9 00:48:28.609525 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 00:48:28.609525 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 9 00:48:28.611935 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 00:48:28.613598 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 00:48:28.614628 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 9 00:48:28.616594 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 9 00:48:28.636418 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 9 00:48:28.636530 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 9 00:48:28.638113 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 9 00:48:28.639391 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Oct 9 00:48:28.640678 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 9 00:48:28.641321 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 9 00:48:28.654654 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 00:48:28.667151 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 9 00:48:28.674123 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 9 00:48:28.674999 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 00:48:28.676495 systemd[1]: Stopped target timers.target - Timer Units. Oct 9 00:48:28.677729 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 9 00:48:28.677839 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 00:48:28.679643 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 9 00:48:28.681053 systemd[1]: Stopped target basic.target - Basic System. Oct 9 00:48:28.682256 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 9 00:48:28.683607 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 00:48:28.684979 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 9 00:48:28.686399 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 9 00:48:28.687719 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 00:48:28.689102 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 9 00:48:28.690527 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 9 00:48:28.691858 systemd[1]: Stopped target swap.target - Swaps. Oct 9 00:48:28.692932 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Oct 9 00:48:28.693034 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 9 00:48:28.694737 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 9 00:48:28.696101 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 00:48:28.697490 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 9 00:48:28.698874 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 00:48:28.699806 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 9 00:48:28.699911 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 9 00:48:28.702078 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 9 00:48:28.702179 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 00:48:28.703578 systemd[1]: Stopped target paths.target - Path Units. Oct 9 00:48:28.704677 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 9 00:48:28.708091 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 00:48:28.709001 systemd[1]: Stopped target slices.target - Slice Units. Oct 9 00:48:28.710538 systemd[1]: Stopped target sockets.target - Socket Units. Oct 9 00:48:28.711676 systemd[1]: iscsid.socket: Deactivated successfully. Oct 9 00:48:28.711765 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 00:48:28.712846 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 9 00:48:28.712919 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 00:48:28.713992 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 9 00:48:28.714104 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 00:48:28.715469 systemd[1]: ignition-files.service: Deactivated successfully. 
Oct 9 00:48:28.715558 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 9 00:48:28.727189 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 9 00:48:28.728428 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 9 00:48:28.729036 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 9 00:48:28.729154 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 00:48:28.730498 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 9 00:48:28.730580 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 00:48:28.735893 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 9 00:48:28.736551 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 9 00:48:28.740742 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 9 00:48:28.743108 ignition[1001]: INFO : Ignition 2.19.0 Oct 9 00:48:28.743108 ignition[1001]: INFO : Stage: umount Oct 9 00:48:28.743108 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 00:48:28.743108 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:48:28.746023 ignition[1001]: INFO : umount: umount passed Oct 9 00:48:28.746704 ignition[1001]: INFO : Ignition finished successfully Oct 9 00:48:28.748834 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 9 00:48:28.748933 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 9 00:48:28.750496 systemd[1]: Stopped target network.target - Network. Oct 9 00:48:28.751573 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 9 00:48:28.751623 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 9 00:48:28.752728 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 9 00:48:28.752771 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Oct 9 00:48:28.754614 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 9 00:48:28.754654 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 9 00:48:28.755578 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 9 00:48:28.755619 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 9 00:48:28.756539 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 9 00:48:28.757857 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 9 00:48:28.766129 systemd-networkd[765]: eth0: DHCPv6 lease lost Oct 9 00:48:28.767340 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 9 00:48:28.767437 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 9 00:48:28.768842 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 9 00:48:28.768942 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 9 00:48:28.771007 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 9 00:48:28.771064 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 9 00:48:28.775128 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 9 00:48:28.775794 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 9 00:48:28.775837 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 00:48:28.777258 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 00:48:28.777294 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:48:28.778612 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 9 00:48:28.778645 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 9 00:48:28.780208 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Oct 9 00:48:28.780242 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 00:48:28.781639 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 00:48:28.789313 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 00:48:28.789414 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 00:48:28.797618 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 00:48:28.797747 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 00:48:28.800628 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 00:48:28.800665 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 00:48:28.801897 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 00:48:28.801940 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 00:48:28.803257 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 00:48:28.803297 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 00:48:28.805180 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 00:48:28.805220 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 00:48:28.807125 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 00:48:28.807169 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 00:48:28.816276 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 00:48:28.817038 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 00:48:28.817112 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 00:48:28.817976 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 9 00:48:28.818010 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 00:48:28.819553 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 00:48:28.819592 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 00:48:28.820965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 00:48:28.820998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:48:28.822826 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 00:48:28.824061 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 00:48:28.824927 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 00:48:28.825002 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 00:48:28.826878 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 00:48:28.827683 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 00:48:28.827732 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 00:48:28.837167 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 00:48:28.842096 systemd[1]: Switching root.
Oct 9 00:48:28.870562 systemd-journald[237]: Journal stopped
Oct 9 00:48:29.545477 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Oct 9 00:48:29.545530 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 00:48:29.545548 kernel: SELinux: policy capability open_perms=1
Oct 9 00:48:29.545557 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 00:48:29.545566 kernel: SELinux: policy capability always_check_network=0
Oct 9 00:48:29.545576 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 00:48:29.545585 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 00:48:29.545594 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 00:48:29.545604 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 00:48:29.545614 kernel: audit: type=1403 audit(1728434909.048:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 00:48:29.545626 systemd[1]: Successfully loaded SELinux policy in 32.665ms.
Oct 9 00:48:29.545646 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.958ms.
Oct 9 00:48:29.545657 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 00:48:29.545668 systemd[1]: Detected virtualization kvm.
Oct 9 00:48:29.545678 systemd[1]: Detected architecture arm64.
Oct 9 00:48:29.545688 systemd[1]: Detected first boot.
Oct 9 00:48:29.545699 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 00:48:29.545710 zram_generator::config[1070]: No configuration found.
Oct 9 00:48:29.545721 systemd[1]: Populated /etc with preset unit settings.
Oct 9 00:48:29.545752 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 00:48:29.545767 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 00:48:29.545778 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 00:48:29.545789 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 00:48:29.545799 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 00:48:29.545810 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 00:48:29.545821 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 00:48:29.545831 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 00:48:29.545842 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 00:48:29.545854 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 00:48:29.545866 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 00:48:29.545878 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 00:48:29.545888 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 00:48:29.545898 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 00:48:29.545908 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 00:48:29.545919 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 00:48:29.545930 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 9 00:48:29.545942 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 00:48:29.545952 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 00:48:29.545962 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 00:48:29.545972 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 00:48:29.545982 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 00:48:29.545992 systemd[1]: Reached target swap.target - Swaps.
Oct 9 00:48:29.546002 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 00:48:29.546013 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 00:48:29.546024 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 00:48:29.546034 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 00:48:29.546068 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 00:48:29.546081 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 00:48:29.546091 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 00:48:29.546101 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 00:48:29.546112 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 00:48:29.546124 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 00:48:29.546134 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 00:48:29.546144 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 00:48:29.546156 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 00:48:29.546167 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 00:48:29.546177 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 00:48:29.546187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 00:48:29.546197 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 00:48:29.546207 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 00:48:29.546217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 00:48:29.546227 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 00:48:29.546239 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 00:48:29.546249 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 00:48:29.546259 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 00:48:29.546269 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 00:48:29.546280 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Oct 9 00:48:29.546291 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Oct 9 00:48:29.546300 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 00:48:29.546310 kernel: loop: module loaded
Oct 9 00:48:29.546319 kernel: fuse: init (API version 7.39)
Oct 9 00:48:29.546330 kernel: ACPI: bus type drm_connector registered
Oct 9 00:48:29.546340 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 00:48:29.546351 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 00:48:29.546361 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 00:48:29.546371 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 00:48:29.546382 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 00:48:29.546407 systemd-journald[1148]: Collecting audit messages is disabled.
Oct 9 00:48:29.546428 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 00:48:29.546440 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 00:48:29.546451 systemd-journald[1148]: Journal started
Oct 9 00:48:29.546470 systemd-journald[1148]: Runtime Journal (/run/log/journal/f07ee52e0c614a17980c5c5bbcafe0aa) is 5.9M, max 47.3M, 41.4M free.
Oct 9 00:48:29.549066 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 00:48:29.549583 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 00:48:29.550454 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 00:48:29.551324 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 00:48:29.552274 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 00:48:29.553392 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 00:48:29.554467 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 00:48:29.554626 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 00:48:29.555683 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 00:48:29.555850 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 00:48:29.556982 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 00:48:29.557210 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 00:48:29.558167 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 00:48:29.558313 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 00:48:29.559354 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 00:48:29.559505 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 00:48:29.560483 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 00:48:29.560687 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 00:48:29.562055 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 00:48:29.563158 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 00:48:29.564457 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 00:48:29.574933 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 00:48:29.583180 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 00:48:29.584857 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 00:48:29.585687 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 00:48:29.589482 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 00:48:29.592244 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 00:48:29.593032 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 00:48:29.593977 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 00:48:29.594846 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 00:48:29.597208 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 00:48:29.601117 systemd-journald[1148]: Time spent on flushing to /var/log/journal/f07ee52e0c614a17980c5c5bbcafe0aa is 13.734ms for 846 entries.
Oct 9 00:48:29.601117 systemd-journald[1148]: System Journal (/var/log/journal/f07ee52e0c614a17980c5c5bbcafe0aa) is 8.0M, max 195.6M, 187.6M free.
Oct 9 00:48:29.628170 systemd-journald[1148]: Received client request to flush runtime journal.
Oct 9 00:48:29.603282 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 00:48:29.607981 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 00:48:29.609062 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 00:48:29.609963 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 00:48:29.611217 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 00:48:29.614481 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 00:48:29.615935 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 00:48:29.630206 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
Oct 9 00:48:29.630223 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
Oct 9 00:48:29.630239 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 00:48:29.631508 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 00:48:29.638508 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 00:48:29.640724 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 00:48:29.641610 udevadm[1212]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 9 00:48:29.661831 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 00:48:29.669256 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 00:48:29.679160 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Oct 9 00:48:29.679173 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
Oct 9 00:48:29.682516 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 00:48:30.010953 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 00:48:30.021168 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 00:48:30.038819 systemd-udevd[1233]: Using default interface naming scheme 'v255'.
Oct 9 00:48:30.050506 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 00:48:30.057203 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 00:48:30.076192 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 00:48:30.077068 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1237)
Oct 9 00:48:30.077805 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Oct 9 00:48:30.079076 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1237)
Oct 9 00:48:30.099066 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1238)
Oct 9 00:48:30.122701 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 00:48:30.144831 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 00:48:30.174985 systemd-networkd[1240]: lo: Link UP
Oct 9 00:48:30.175148 systemd-networkd[1240]: lo: Gained carrier
Oct 9 00:48:30.175850 systemd-networkd[1240]: Enumeration completed
Oct 9 00:48:30.176248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 00:48:30.176761 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 00:48:30.176767 systemd-networkd[1240]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 00:48:30.177121 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 00:48:30.177332 systemd-networkd[1240]: eth0: Link UP
Oct 9 00:48:30.177335 systemd-networkd[1240]: eth0: Gained carrier
Oct 9 00:48:30.177346 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 00:48:30.179521 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 00:48:30.182963 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 00:48:30.185677 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 00:48:30.193162 systemd-networkd[1240]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 00:48:30.208227 lvm[1272]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 00:48:30.213423 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:48:30.230270 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 00:48:30.231315 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 00:48:30.252174 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 00:48:30.255315 lvm[1279]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 00:48:30.291409 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 00:48:30.292480 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 00:48:30.293367 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 00:48:30.293392 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 00:48:30.294081 systemd[1]: Reached target machines.target - Containers.
Oct 9 00:48:30.295606 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 00:48:30.306160 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 00:48:30.307962 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 00:48:30.308810 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 00:48:30.309685 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 00:48:30.311581 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 00:48:30.314273 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 00:48:30.318491 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 00:48:30.321194 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 00:48:30.328110 kernel: loop0: detected capacity change from 0 to 116808
Oct 9 00:48:30.330928 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 00:48:30.332339 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 00:48:30.337170 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 00:48:30.373066 kernel: loop1: detected capacity change from 0 to 113456
Oct 9 00:48:30.414068 kernel: loop2: detected capacity change from 0 to 194512
Oct 9 00:48:30.447090 kernel: loop3: detected capacity change from 0 to 116808
Oct 9 00:48:30.451058 kernel: loop4: detected capacity change from 0 to 113456
Oct 9 00:48:30.457065 kernel: loop5: detected capacity change from 0 to 194512
Oct 9 00:48:30.460876 (sd-merge)[1302]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 9 00:48:30.461251 (sd-merge)[1302]: Merged extensions into '/usr'.
Oct 9 00:48:30.465097 systemd[1]: Reloading requested from client PID 1289 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 00:48:30.465208 systemd[1]: Reloading...
Oct 9 00:48:30.509072 zram_generator::config[1330]: No configuration found.
Oct 9 00:48:30.533521 ldconfig[1285]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 00:48:30.600081 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 00:48:30.641608 systemd[1]: Reloading finished in 176 ms.
Oct 9 00:48:30.656617 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 00:48:30.657798 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 00:48:30.669168 systemd[1]: Starting ensure-sysext.service...
Oct 9 00:48:30.670786 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 00:48:30.675803 systemd[1]: Reloading requested from client PID 1371 ('systemctl') (unit ensure-sysext.service)...
Oct 9 00:48:30.675818 systemd[1]: Reloading...
Oct 9 00:48:30.687677 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 00:48:30.687945 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 00:48:30.688572 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 00:48:30.688798 systemd-tmpfiles[1382]: ACLs are not supported, ignoring.
Oct 9 00:48:30.688849 systemd-tmpfiles[1382]: ACLs are not supported, ignoring.
Oct 9 00:48:30.691106 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 00:48:30.691119 systemd-tmpfiles[1382]: Skipping /boot
Oct 9 00:48:30.697962 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 00:48:30.697977 systemd-tmpfiles[1382]: Skipping /boot
Oct 9 00:48:30.721192 zram_generator::config[1408]: No configuration found.
Oct 9 00:48:30.811570 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 00:48:30.853942 systemd[1]: Reloading finished in 177 ms.
Oct 9 00:48:30.867935 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 00:48:30.887304 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 00:48:30.889609 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 00:48:30.891854 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 00:48:30.897352 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 00:48:30.900184 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 00:48:30.906390 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 00:48:30.911336 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 00:48:30.913602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 00:48:30.918362 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 00:48:30.919318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 00:48:30.920012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 00:48:30.920188 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 00:48:30.921602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 00:48:30.921796 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 00:48:30.925373 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 00:48:30.927017 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 00:48:30.927283 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 00:48:30.932596 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 00:48:30.932869 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 00:48:30.939608 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 00:48:30.942855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 00:48:30.945929 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 00:48:30.952325 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 00:48:30.956393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 00:48:30.960694 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 00:48:30.962227 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 00:48:30.963652 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 00:48:30.967764 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 00:48:30.968177 augenrules[1498]: No rules
Oct 9 00:48:30.968950 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 00:48:30.969115 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 00:48:30.970504 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 00:48:30.970705 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 00:48:30.971805 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 00:48:30.971938 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 00:48:30.973267 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 00:48:30.973449 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 00:48:30.992312 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 00:48:30.993104 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 00:48:30.994376 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 00:48:30.997319 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 00:48:31.001618 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 00:48:31.004486 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 00:48:31.005323 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 00:48:31.005449 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 00:48:31.006361 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 00:48:31.006521 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 00:48:31.008569 systemd-resolved[1456]: Positive Trust Anchors:
Oct 9 00:48:31.008647 systemd-resolved[1456]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 00:48:31.008682 systemd-resolved[1456]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 00:48:31.010028 systemd[1]: Finished ensure-sysext.service.
Oct 9 00:48:31.015409 systemd-resolved[1456]: Defaulting to hostname 'linux'.
Oct 9 00:48:31.019564 augenrules[1514]: /sbin/augenrules: No change
Oct 9 00:48:31.017751 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 00:48:31.017924 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 00:48:31.019053 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 00:48:31.019205 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 00:48:31.020366 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 00:48:31.021356 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 00:48:31.021570 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 00:48:31.024314 augenrules[1542]: No rules
Oct 9 00:48:31.025055 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 00:48:31.025284 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 00:48:31.026928 systemd[1]: Reached target network.target - Network.
Oct 9 00:48:31.027780 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 00:48:31.028632 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 00:48:31.028689 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 00:48:31.036229 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 9 00:48:31.078672 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 9 00:48:31.079523 systemd-timesyncd[1553]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 9 00:48:31.079572 systemd-timesyncd[1553]: Initial clock synchronization to Wed 2024-10-09 00:48:31.474259 UTC. Oct 9 00:48:31.079995 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 00:48:31.080880 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 00:48:31.081799 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 00:48:31.082693 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 00:48:31.083588 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 00:48:31.083621 systemd[1]: Reached target paths.target - Path Units. Oct 9 00:48:31.084263 systemd[1]: Reached target time-set.target - System Time Set. Oct 9 00:48:31.085141 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 00:48:31.085999 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 00:48:31.086911 systemd[1]: Reached target timers.target - Timer Units. 
Oct 9 00:48:31.088306 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 9 00:48:31.090538 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 00:48:31.092579 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 9 00:48:31.098068 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 9 00:48:31.098852 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 00:48:31.099548 systemd[1]: Reached target basic.target - Basic System. Oct 9 00:48:31.100424 systemd[1]: System is tainted: cgroupsv1 Oct 9 00:48:31.100474 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 00:48:31.100494 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 9 00:48:31.101714 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 00:48:31.103592 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 00:48:31.105469 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 00:48:31.107970 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 9 00:48:31.110124 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 00:48:31.112759 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 00:48:31.119484 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 00:48:31.124241 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Oct 9 00:48:31.125455 jq[1565]: false Oct 9 00:48:31.128283 extend-filesystems[1567]: Found loop3 Oct 9 00:48:31.130937 extend-filesystems[1567]: Found loop4 Oct 9 00:48:31.130937 extend-filesystems[1567]: Found loop5 Oct 9 00:48:31.130937 extend-filesystems[1567]: Found vda Oct 9 00:48:31.130937 extend-filesystems[1567]: Found vda1 Oct 9 00:48:31.130937 extend-filesystems[1567]: Found vda2 Oct 9 00:48:31.130937 extend-filesystems[1567]: Found vda3 Oct 9 00:48:31.130937 extend-filesystems[1567]: Found usr Oct 9 00:48:31.130937 extend-filesystems[1567]: Found vda4 Oct 9 00:48:31.130937 extend-filesystems[1567]: Found vda6 Oct 9 00:48:31.130937 extend-filesystems[1567]: Found vda7 Oct 9 00:48:31.130937 extend-filesystems[1567]: Found vda9 Oct 9 00:48:31.130937 extend-filesystems[1567]: Checking size of /dev/vda9 Oct 9 00:48:31.128961 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 9 00:48:31.149892 extend-filesystems[1567]: Resized partition /dev/vda9 Oct 9 00:48:31.134006 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 9 00:48:31.145002 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 9 00:48:31.153470 systemd[1]: Starting update-engine.service - Update Engine... Oct 9 00:48:31.156000 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 9 00:48:31.163123 extend-filesystems[1590]: resize2fs 1.47.1 (20-May-2024) Oct 9 00:48:31.160285 dbus-daemon[1564]: [system] SELinux support is enabled Oct 9 00:48:31.161333 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Oct 9 00:48:31.166073 jq[1591]: true Oct 9 00:48:31.173451 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 9 00:48:31.173476 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1252) Oct 9 00:48:31.176210 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 9 00:48:31.176462 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 9 00:48:31.176741 systemd[1]: motdgen.service: Deactivated successfully. Oct 9 00:48:31.176969 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 9 00:48:31.194143 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 9 00:48:31.194388 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 9 00:48:31.208435 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (Power Button) Oct 9 00:48:31.209291 systemd-logind[1577]: New seat seat0. Oct 9 00:48:31.210950 update_engine[1589]: I20241009 00:48:31.210757 1589 main.cc:92] Flatcar Update Engine starting Oct 9 00:48:31.214686 update_engine[1589]: I20241009 00:48:31.214641 1589 update_check_scheduler.cc:74] Next update check in 5m9s Oct 9 00:48:31.217190 systemd[1]: Started systemd-logind.service - User Login Management. 
Oct 9 00:48:31.218130 (ntainerd)[1604]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 9 00:48:31.219317 dbus-daemon[1564]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 9 00:48:31.220155 tar[1597]: linux-arm64/helm Oct 9 00:48:31.223006 jq[1603]: true Oct 9 00:48:31.225070 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 9 00:48:31.237212 extend-filesystems[1590]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 9 00:48:31.237212 extend-filesystems[1590]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 9 00:48:31.237212 extend-filesystems[1590]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 9 00:48:31.240876 extend-filesystems[1567]: Resized filesystem in /dev/vda9 Oct 9 00:48:31.239926 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 9 00:48:31.240190 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 9 00:48:31.242919 systemd[1]: Started update-engine.service - Update Engine. Oct 9 00:48:31.246035 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 9 00:48:31.246223 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 9 00:48:31.247966 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 9 00:48:31.248113 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 9 00:48:31.250275 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Oct 9 00:48:31.251324 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 9 00:48:31.289324 bash[1632]: Updated "/home/core/.ssh/authorized_keys" Oct 9 00:48:31.290871 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 9 00:48:31.293738 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 9 00:48:31.331869 locksmithd[1619]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 9 00:48:31.415607 containerd[1604]: time="2024-10-09T00:48:31.415472920Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22 Oct 9 00:48:31.442453 containerd[1604]: time="2024-10-09T00:48:31.442400840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:48:31.444063 containerd[1604]: time="2024-10-09T00:48:31.444012040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:48:31.444157 containerd[1604]: time="2024-10-09T00:48:31.444143040Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 9 00:48:31.444238 containerd[1604]: time="2024-10-09T00:48:31.444223520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 9 00:48:31.444438 containerd[1604]: time="2024-10-09T00:48:31.444418480Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 9 00:48:31.445584 containerd[1604]: time="2024-10-09T00:48:31.444490720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Oct 9 00:48:31.445584 containerd[1604]: time="2024-10-09T00:48:31.444562840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:48:31.445584 containerd[1604]: time="2024-10-09T00:48:31.444577640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:48:31.445584 containerd[1604]: time="2024-10-09T00:48:31.444795640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:48:31.445584 containerd[1604]: time="2024-10-09T00:48:31.444811160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 9 00:48:31.445584 containerd[1604]: time="2024-10-09T00:48:31.444823720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:48:31.445584 containerd[1604]: time="2024-10-09T00:48:31.444832920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 9 00:48:31.445584 containerd[1604]: time="2024-10-09T00:48:31.444904120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:48:31.445584 containerd[1604]: time="2024-10-09T00:48:31.445110400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:48:31.445584 containerd[1604]: time="2024-10-09T00:48:31.445234960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:48:31.445584 containerd[1604]: time="2024-10-09T00:48:31.445248240Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 9 00:48:31.445862 containerd[1604]: time="2024-10-09T00:48:31.445323440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 9 00:48:31.445862 containerd[1604]: time="2024-10-09T00:48:31.445370440Z" level=info msg="metadata content store policy set" policy=shared Oct 9 00:48:31.449109 containerd[1604]: time="2024-10-09T00:48:31.449081080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 9 00:48:31.449258 containerd[1604]: time="2024-10-09T00:48:31.449218200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 9 00:48:31.449386 containerd[1604]: time="2024-10-09T00:48:31.449373000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 9 00:48:31.449445 containerd[1604]: time="2024-10-09T00:48:31.449434040Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 9 00:48:31.449496 containerd[1604]: time="2024-10-09T00:48:31.449485440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 9 00:48:31.449704 containerd[1604]: time="2024-10-09T00:48:31.449684760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 9 00:48:31.450129 containerd[1604]: time="2024-10-09T00:48:31.450107160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Oct 9 00:48:31.450329 containerd[1604]: time="2024-10-09T00:48:31.450309040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 9 00:48:31.450393 containerd[1604]: time="2024-10-09T00:48:31.450381040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 9 00:48:31.450445 containerd[1604]: time="2024-10-09T00:48:31.450433800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 9 00:48:31.450497 containerd[1604]: time="2024-10-09T00:48:31.450485920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 9 00:48:31.450569 containerd[1604]: time="2024-10-09T00:48:31.450555280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 9 00:48:31.450619 containerd[1604]: time="2024-10-09T00:48:31.450608200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 9 00:48:31.450674 containerd[1604]: time="2024-10-09T00:48:31.450661480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 9 00:48:31.450744 containerd[1604]: time="2024-10-09T00:48:31.450717720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 9 00:48:31.450796 containerd[1604]: time="2024-10-09T00:48:31.450785160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 9 00:48:31.450847 containerd[1604]: time="2024-10-09T00:48:31.450835600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Oct 9 00:48:31.450895 containerd[1604]: time="2024-10-09T00:48:31.450884320Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 9 00:48:31.450971 containerd[1604]: time="2024-10-09T00:48:31.450957400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 9 00:48:31.451022 containerd[1604]: time="2024-10-09T00:48:31.451011680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 9 00:48:31.451091 containerd[1604]: time="2024-10-09T00:48:31.451078960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 9 00:48:31.451161 containerd[1604]: time="2024-10-09T00:48:31.451147960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 9 00:48:31.451221 containerd[1604]: time="2024-10-09T00:48:31.451208480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 9 00:48:31.451273 containerd[1604]: time="2024-10-09T00:48:31.451262200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 9 00:48:31.451322 containerd[1604]: time="2024-10-09T00:48:31.451311280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 9 00:48:31.451384 containerd[1604]: time="2024-10-09T00:48:31.451371800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 9 00:48:31.451445 containerd[1604]: time="2024-10-09T00:48:31.451433200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 9 00:48:31.451498 containerd[1604]: time="2024-10-09T00:48:31.451486960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Oct 9 00:48:31.451559 containerd[1604]: time="2024-10-09T00:48:31.451547360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 9 00:48:31.451625 containerd[1604]: time="2024-10-09T00:48:31.451605720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 9 00:48:31.451683 containerd[1604]: time="2024-10-09T00:48:31.451671560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 9 00:48:31.451751 containerd[1604]: time="2024-10-09T00:48:31.451738120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 9 00:48:31.451819 containerd[1604]: time="2024-10-09T00:48:31.451806360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 9 00:48:31.451873 containerd[1604]: time="2024-10-09T00:48:31.451861280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 9 00:48:31.451920 containerd[1604]: time="2024-10-09T00:48:31.451909200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 9 00:48:31.452109 containerd[1604]: time="2024-10-09T00:48:31.452091800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 00:48:31.453999 containerd[1604]: time="2024-10-09T00:48:31.453972840Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 00:48:31.454086 containerd[1604]: time="2024-10-09T00:48:31.454070600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Oct 9 00:48:31.454143 containerd[1604]: time="2024-10-09T00:48:31.454129520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 00:48:31.454194 containerd[1604]: time="2024-10-09T00:48:31.454181800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 00:48:31.454263 containerd[1604]: time="2024-10-09T00:48:31.454250840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 00:48:31.457070 containerd[1604]: time="2024-10-09T00:48:31.454307320Z" level=info msg="NRI interface is disabled by configuration." Oct 9 00:48:31.457070 containerd[1604]: time="2024-10-09T00:48:31.454324800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 9 00:48:31.457148 containerd[1604]: time="2024-10-09T00:48:31.454662000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 00:48:31.457148 containerd[1604]: time="2024-10-09T00:48:31.454710840Z" level=info msg="Connect containerd service" Oct 9 00:48:31.457148 containerd[1604]: time="2024-10-09T00:48:31.454754360Z" level=info msg="using legacy CRI server" Oct 9 00:48:31.457148 containerd[1604]: time="2024-10-09T00:48:31.454764240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 00:48:31.457148 containerd[1604]: time="2024-10-09T00:48:31.454847920Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 00:48:31.457148 containerd[1604]: time="2024-10-09T00:48:31.456076800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 00:48:31.457558 containerd[1604]: time="2024-10-09T00:48:31.457525640Z" level=info msg="Start subscribing containerd event" Oct 9 00:48:31.457637 containerd[1604]: time="2024-10-09T00:48:31.457617560Z" level=info msg="Start recovering state" Oct 9 00:48:31.457765 containerd[1604]: time="2024-10-09T00:48:31.457748960Z" level=info msg="Start event monitor" Oct 9 00:48:31.457827 containerd[1604]: time="2024-10-09T00:48:31.457813840Z" level=info msg="Start snapshots syncer" Oct 9 00:48:31.457885 containerd[1604]: time="2024-10-09T00:48:31.457872600Z" level=info msg="Start cni network conf syncer for default" Oct 9 00:48:31.457938 containerd[1604]: time="2024-10-09T00:48:31.457926080Z" level=info msg="Start streaming server" Oct 9 00:48:31.458527 containerd[1604]: time="2024-10-09T00:48:31.458499880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 00:48:31.466188 containerd[1604]: time="2024-10-09T00:48:31.466138760Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 00:48:31.466355 containerd[1604]: time="2024-10-09T00:48:31.466338600Z" level=info msg="containerd successfully booted in 0.053705s" Oct 9 00:48:31.466562 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 00:48:31.570885 tar[1597]: linux-arm64/LICENSE Oct 9 00:48:31.571112 tar[1597]: linux-arm64/README.md Oct 9 00:48:31.580968 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Oct 9 00:48:31.681864 sshd_keygen[1593]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 9 00:48:31.701143 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 00:48:31.710422 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 00:48:31.715904 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 00:48:31.716234 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 00:48:31.718659 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 00:48:31.732838 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 00:48:31.747464 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 00:48:31.749582 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 9 00:48:31.750604 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 00:48:31.905262 systemd-networkd[1240]: eth0: Gained IPv6LL Oct 9 00:48:31.907740 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 00:48:31.909250 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 00:48:31.921328 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 9 00:48:31.923696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:48:31.925772 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 00:48:31.944142 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 9 00:48:31.944508 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 9 00:48:31.946104 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 00:48:31.947518 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 00:48:32.421836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 00:48:32.423224 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 00:48:32.426165 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 00:48:32.429284 systemd[1]: Startup finished in 4.885s (kernel) + 3.414s (userspace) = 8.300s. Oct 9 00:48:32.919120 kubelet[1706]: E1009 00:48:32.918955 1706 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 00:48:32.921726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 00:48:32.921917 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 00:48:37.977948 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 00:48:37.997331 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:49244.service - OpenSSH per-connection server daemon (10.0.0.1:49244). Oct 9 00:48:38.047217 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 49244 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:48:38.049021 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:48:38.061927 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 00:48:38.073404 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 00:48:38.075442 systemd-logind[1577]: New session 1 of user core. Oct 9 00:48:38.083376 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 00:48:38.085581 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Oct 9 00:48:38.092693 (systemd)[1727]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 00:48:38.165589 systemd[1727]: Queued start job for default target default.target. Oct 9 00:48:38.165973 systemd[1727]: Created slice app.slice - User Application Slice. Oct 9 00:48:38.165998 systemd[1727]: Reached target paths.target - Paths. Oct 9 00:48:38.166009 systemd[1727]: Reached target timers.target - Timers. Oct 9 00:48:38.175188 systemd[1727]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 00:48:38.183183 systemd[1727]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 00:48:38.183249 systemd[1727]: Reached target sockets.target - Sockets. Oct 9 00:48:38.183262 systemd[1727]: Reached target basic.target - Basic System. Oct 9 00:48:38.183301 systemd[1727]: Reached target default.target - Main User Target. Oct 9 00:48:38.183326 systemd[1727]: Startup finished in 84ms. Oct 9 00:48:38.183657 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 00:48:38.185366 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 00:48:38.247382 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:49258.service - OpenSSH per-connection server daemon (10.0.0.1:49258). Oct 9 00:48:38.287125 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 49258 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:48:38.288496 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:48:38.292811 systemd-logind[1577]: New session 2 of user core. Oct 9 00:48:38.305426 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 00:48:38.361290 sshd[1739]: pam_unix(sshd:session): session closed for user core Oct 9 00:48:38.369354 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:49274.service - OpenSSH per-connection server daemon (10.0.0.1:49274). 
Oct 9 00:48:38.369745 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:49258.service: Deactivated successfully. Oct 9 00:48:38.371709 systemd-logind[1577]: Session 2 logged out. Waiting for processes to exit. Oct 9 00:48:38.372272 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 00:48:38.373755 systemd-logind[1577]: Removed session 2. Oct 9 00:48:38.400767 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 49274 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:48:38.402117 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:48:38.406183 systemd-logind[1577]: New session 3 of user core. Oct 9 00:48:38.414394 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 00:48:38.464722 sshd[1744]: pam_unix(sshd:session): session closed for user core Oct 9 00:48:38.475439 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:49288.service - OpenSSH per-connection server daemon (10.0.0.1:49288). Oct 9 00:48:38.475845 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:49274.service: Deactivated successfully. Oct 9 00:48:38.478578 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. Oct 9 00:48:38.478755 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 00:48:38.479903 systemd-logind[1577]: Removed session 3. Oct 9 00:48:38.506595 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 49288 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:48:38.507995 sshd[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:48:38.512093 systemd-logind[1577]: New session 4 of user core. Oct 9 00:48:38.520420 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 00:48:38.575629 sshd[1752]: pam_unix(sshd:session): session closed for user core Oct 9 00:48:38.593328 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:49304.service - OpenSSH per-connection server daemon (10.0.0.1:49304). 
Oct 9 00:48:38.593888 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:49288.service: Deactivated successfully.
Oct 9 00:48:38.596328 systemd[1]: session-4.scope: Deactivated successfully.
Oct 9 00:48:38.596494 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit.
Oct 9 00:48:38.597938 systemd-logind[1577]: Removed session 4.
Oct 9 00:48:38.624943 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 49304 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:48:38.626270 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:48:38.630581 systemd-logind[1577]: New session 5 of user core.
Oct 9 00:48:38.644382 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 9 00:48:38.706429 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 9 00:48:38.706718 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 00:48:38.722186 sudo[1767]: pam_unix(sudo:session): session closed for user root
Oct 9 00:48:38.724093 sshd[1760]: pam_unix(sshd:session): session closed for user core
Oct 9 00:48:38.734329 systemd[1]: Started sshd@5-10.0.0.71:22-10.0.0.1:49316.service - OpenSSH per-connection server daemon (10.0.0.1:49316).
Oct 9 00:48:38.734735 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:49304.service: Deactivated successfully.
Oct 9 00:48:38.736573 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit.
Oct 9 00:48:38.737114 systemd[1]: session-5.scope: Deactivated successfully.
Oct 9 00:48:38.738674 systemd-logind[1577]: Removed session 5.
Oct 9 00:48:38.765860 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 49316 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:48:38.767169 sshd[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:48:38.771324 systemd-logind[1577]: New session 6 of user core.
Oct 9 00:48:38.784367 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 9 00:48:38.836655 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 9 00:48:38.836945 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 00:48:38.840549 sudo[1777]: pam_unix(sudo:session): session closed for user root
Oct 9 00:48:38.845475 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 9 00:48:38.845763 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 00:48:38.863369 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 00:48:38.887756 augenrules[1799]: No rules
Oct 9 00:48:38.888896 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 00:48:38.889195 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 00:48:38.890629 sudo[1776]: pam_unix(sudo:session): session closed for user root
Oct 9 00:48:38.892735 sshd[1769]: pam_unix(sshd:session): session closed for user core
Oct 9 00:48:38.902448 systemd[1]: Started sshd@6-10.0.0.71:22-10.0.0.1:49318.service - OpenSSH per-connection server daemon (10.0.0.1:49318).
Oct 9 00:48:38.902853 systemd[1]: sshd@5-10.0.0.71:22-10.0.0.1:49316.service: Deactivated successfully.
Oct 9 00:48:38.904769 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit.
Oct 9 00:48:38.905397 systemd[1]: session-6.scope: Deactivated successfully.
Oct 9 00:48:38.906647 systemd-logind[1577]: Removed session 6.
Oct 9 00:48:38.935655 sshd[1805]: Accepted publickey for core from 10.0.0.1 port 49318 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:48:38.936935 sshd[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:48:38.941133 systemd-logind[1577]: New session 7 of user core.
Oct 9 00:48:38.950533 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 9 00:48:39.001919 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 9 00:48:39.002241 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 00:48:39.324326 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 9 00:48:39.324561 (dockerd)[1833]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 9 00:48:39.573919 dockerd[1833]: time="2024-10-09T00:48:39.573852182Z" level=info msg="Starting up"
Oct 9 00:48:39.870835 dockerd[1833]: time="2024-10-09T00:48:39.870785577Z" level=info msg="Loading containers: start."
Oct 9 00:48:40.012083 kernel: Initializing XFRM netlink socket
Oct 9 00:48:40.085394 systemd-networkd[1240]: docker0: Link UP
Oct 9 00:48:40.123540 dockerd[1833]: time="2024-10-09T00:48:40.123396792Z" level=info msg="Loading containers: done."
Oct 9 00:48:40.139085 dockerd[1833]: time="2024-10-09T00:48:40.139013244Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 9 00:48:40.139233 dockerd[1833]: time="2024-10-09T00:48:40.139145002Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Oct 9 00:48:40.139278 dockerd[1833]: time="2024-10-09T00:48:40.139258087Z" level=info msg="Daemon has completed initialization"
Oct 9 00:48:40.167889 dockerd[1833]: time="2024-10-09T00:48:40.167681910Z" level=info msg="API listen on /run/docker.sock"
Oct 9 00:48:40.168126 systemd[1]: Started docker.service - Docker Application Container Engine.
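The dockerd startup records above show storage-driver=overlay2 and a warning that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR. Pinning the storage driver explicitly is done in /etc/docker/daemon.json; a minimal sketch follows (illustrative only — this host's actual daemon.json is not shown in the log, and the log-opts values are assumptions):

```json
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m" }
}
```

After editing the file, the daemon has to be restarted (`systemctl restart docker`) for the settings to take effect.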
Oct 9 00:48:40.730283 containerd[1604]: time="2024-10-09T00:48:40.730249385Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\""
Oct 9 00:48:41.417411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1894760819.mount: Deactivated successfully.
Oct 9 00:48:42.312185 containerd[1604]: time="2024-10-09T00:48:42.312137571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:42.313121 containerd[1604]: time="2024-10-09T00:48:42.312821821Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=32286060"
Oct 9 00:48:42.313842 containerd[1604]: time="2024-10-09T00:48:42.313782656Z" level=info msg="ImageCreate event name:\"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:42.319490 containerd[1604]: time="2024-10-09T00:48:42.317508735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:42.319490 containerd[1604]: time="2024-10-09T00:48:42.319320841Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"32282858\" in 1.589033096s"
Oct 9 00:48:42.319490 containerd[1604]: time="2024-10-09T00:48:42.319361158Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\""
Oct 9 00:48:42.337533 containerd[1604]: time="2024-10-09T00:48:42.337497983Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\""
Oct 9 00:48:43.172186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 9 00:48:43.185523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 00:48:43.293441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 00:48:43.298403 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 00:48:43.349404 kubelet[2114]: E1009 00:48:43.349325 2114 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 00:48:43.353322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 00:48:43.353473 systemd[1]: kubelet.service: Failed with result 'exit-code'.
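The kubelet failure above (run.go:74) is caused by a missing /var/lib/kubelet/config.yaml; on a kubeadm-managed node this file is normally written by `kubeadm init`/`kubeadm join`. A minimal KubeletConfiguration sketch for that path follows — the staticPodPath, client CA path, and cgroupfs driver match values that appear in the later kubelet startup records in this log, while the remaining fields are illustrative assumptions:

```yaml
# Sketch only - not recovered from this host's actual config.yaml.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt   # matches the client-ca-bundle path logged later
authorization:
  mode: Webhook
staticPodPath: /etc/kubernetes/manifests       # matches "Adding static pod path" logged later
cgroupDriver: cgroupfs                         # matches CgroupDriver in the logged nodeConfig
```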
Oct 9 00:48:43.718523 containerd[1604]: time="2024-10-09T00:48:43.718447136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:43.718931 containerd[1604]: time="2024-10-09T00:48:43.718851559Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=29374206"
Oct 9 00:48:43.719802 containerd[1604]: time="2024-10-09T00:48:43.719753288Z" level=info msg="ImageCreate event name:\"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:43.722910 containerd[1604]: time="2024-10-09T00:48:43.722877017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:43.724279 containerd[1604]: time="2024-10-09T00:48:43.724141218Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"30862018\" in 1.386508647s"
Oct 9 00:48:43.724279 containerd[1604]: time="2024-10-09T00:48:43.724183291Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\""
Oct 9 00:48:43.743118 containerd[1604]: time="2024-10-09T00:48:43.743077371Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\""
Oct 9 00:48:44.714839 containerd[1604]: time="2024-10-09T00:48:44.714776873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:44.715749 containerd[1604]: time="2024-10-09T00:48:44.715699919Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=15751219"
Oct 9 00:48:44.716611 containerd[1604]: time="2024-10-09T00:48:44.716538412Z" level=info msg="ImageCreate event name:\"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:44.720123 containerd[1604]: time="2024-10-09T00:48:44.720059025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:44.721489 containerd[1604]: time="2024-10-09T00:48:44.721287315Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"17239049\" in 978.170311ms"
Oct 9 00:48:44.721489 containerd[1604]: time="2024-10-09T00:48:44.721321451Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\""
Oct 9 00:48:44.740248 containerd[1604]: time="2024-10-09T00:48:44.740214532Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\""
Oct 9 00:48:45.714484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1742412493.mount: Deactivated successfully.
Oct 9 00:48:46.036108 containerd[1604]: time="2024-10-09T00:48:46.035952910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:46.036677 containerd[1604]: time="2024-10-09T00:48:46.036621076Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=25254040"
Oct 9 00:48:46.037535 containerd[1604]: time="2024-10-09T00:48:46.037506686Z" level=info msg="ImageCreate event name:\"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:46.040333 containerd[1604]: time="2024-10-09T00:48:46.040291521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:46.040957 containerd[1604]: time="2024-10-09T00:48:46.040921479Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"25253057\" in 1.300669477s"
Oct 9 00:48:46.040957 containerd[1604]: time="2024-10-09T00:48:46.040954851Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\""
Oct 9 00:48:46.058980 containerd[1604]: time="2024-10-09T00:48:46.058942800Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 9 00:48:46.618639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4169274962.mount: Deactivated successfully.
Oct 9 00:48:47.124009 containerd[1604]: time="2024-10-09T00:48:47.123680699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:47.124447 containerd[1604]: time="2024-10-09T00:48:47.124398205Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Oct 9 00:48:47.125177 containerd[1604]: time="2024-10-09T00:48:47.125126141Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:47.129018 containerd[1604]: time="2024-10-09T00:48:47.128957608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:47.130208 containerd[1604]: time="2024-10-09T00:48:47.130177558Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.071195066s"
Oct 9 00:48:47.130487 containerd[1604]: time="2024-10-09T00:48:47.130288170Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Oct 9 00:48:47.148663 containerd[1604]: time="2024-10-09T00:48:47.148632807Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 9 00:48:47.573906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660230955.mount: Deactivated successfully.
Oct 9 00:48:47.579526 containerd[1604]: time="2024-10-09T00:48:47.578759874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:47.579526 containerd[1604]: time="2024-10-09T00:48:47.579408686Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Oct 9 00:48:47.580084 containerd[1604]: time="2024-10-09T00:48:47.580029634Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:47.582214 containerd[1604]: time="2024-10-09T00:48:47.582169390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:47.583141 containerd[1604]: time="2024-10-09T00:48:47.583115005Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 434.44479ms"
Oct 9 00:48:47.583343 containerd[1604]: time="2024-10-09T00:48:47.583212168Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Oct 9 00:48:47.600948 containerd[1604]: time="2024-10-09T00:48:47.600916771Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 9 00:48:48.100223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount977028536.mount: Deactivated successfully.
Oct 9 00:48:49.414113 containerd[1604]: time="2024-10-09T00:48:49.414063489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:49.415037 containerd[1604]: time="2024-10-09T00:48:49.414661081Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Oct 9 00:48:49.415805 containerd[1604]: time="2024-10-09T00:48:49.415768338Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:49.419224 containerd[1604]: time="2024-10-09T00:48:49.419181532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:48:49.420576 containerd[1604]: time="2024-10-09T00:48:49.420527721Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 1.819575005s"
Oct 9 00:48:49.420576 containerd[1604]: time="2024-10-09T00:48:49.420560166Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Oct 9 00:48:53.604571 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 9 00:48:53.615398 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 00:48:53.818469 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 9 00:48:53.818614 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 9 00:48:53.818899 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 00:48:53.825422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 00:48:53.842126 systemd[1]: Reloading requested from client PID 2351 ('systemctl') (unit session-7.scope)...
Oct 9 00:48:53.842143 systemd[1]: Reloading...
Oct 9 00:48:53.901113 zram_generator::config[2390]: No configuration found.
Oct 9 00:48:54.046753 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 00:48:54.094474 systemd[1]: Reloading finished in 252 ms.
Oct 9 00:48:54.124834 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 9 00:48:54.124899 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 9 00:48:54.125151 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 00:48:54.127303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 00:48:54.221463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 00:48:54.225056 (kubelet)[2448]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 00:48:54.265347 kubelet[2448]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 00:48:54.265347 kubelet[2448]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 9 00:48:54.265347 kubelet[2448]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 00:48:54.265347 kubelet[2448]: I1009 00:48:54.264023 2448 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 9 00:48:55.580160 kubelet[2448]: I1009 00:48:55.578339 2448 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 9 00:48:55.580160 kubelet[2448]: I1009 00:48:55.578374 2448 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 9 00:48:55.580160 kubelet[2448]: I1009 00:48:55.578861 2448 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 9 00:48:55.605838 kubelet[2448]: E1009 00:48:55.605796 2448 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.71:6443: connect: connection refused
Oct 9 00:48:55.606101 kubelet[2448]: I1009 00:48:55.605884 2448 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 00:48:55.613366 kubelet[2448]: I1009 00:48:55.613338 2448 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 9 00:48:55.614404 kubelet[2448]: I1009 00:48:55.614383 2448 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 00:48:55.614613 kubelet[2448]: I1009 00:48:55.614598 2448 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 9 00:48:55.614698 kubelet[2448]: I1009 00:48:55.614621 2448 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 00:48:55.614698 kubelet[2448]: I1009 00:48:55.614631 2448 container_manager_linux.go:301] "Creating device plugin manager"
Oct 9 00:48:55.614753 kubelet[2448]: I1009 00:48:55.614739 2448 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 00:48:55.616770 kubelet[2448]: I1009 00:48:55.616743 2448 kubelet.go:396] "Attempting to sync node with API server"
Oct 9 00:48:55.616770 kubelet[2448]: I1009 00:48:55.616770 2448 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 00:48:55.617268 kubelet[2448]: I1009 00:48:55.617153 2448 kubelet.go:312] "Adding apiserver pod source"
Oct 9 00:48:55.617268 kubelet[2448]: I1009 00:48:55.617184 2448 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 00:48:55.617499 kubelet[2448]: W1009 00:48:55.617455 2448 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused
Oct 9 00:48:55.617575 kubelet[2448]: E1009 00:48:55.617563 2448 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused
Oct 9 00:48:55.617749 kubelet[2448]: W1009 00:48:55.617694 2448 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.71:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused
Oct 9 00:48:55.617749 kubelet[2448]: E1009 00:48:55.617742 2448 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.71:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused
Oct 9 00:48:55.619512 kubelet[2448]: I1009 00:48:55.619273 2448 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1"
Oct 9 00:48:55.619917 kubelet[2448]: I1009 00:48:55.619863 2448 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 00:48:55.626134 kubelet[2448]: W1009 00:48:55.626100 2448 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 9 00:48:55.628776 kubelet[2448]: I1009 00:48:55.628752 2448 server.go:1256] "Started kubelet"
Oct 9 00:48:55.629298 kubelet[2448]: I1009 00:48:55.629032 2448 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 00:48:55.632388 kubelet[2448]: I1009 00:48:55.629208 2448 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 00:48:55.632602 kubelet[2448]: I1009 00:48:55.632578 2448 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 00:48:55.634208 kubelet[2448]: I1009 00:48:55.634187 2448 server.go:461] "Adding debug handlers to kubelet server"
Oct 9 00:48:55.639413 kubelet[2448]: I1009 00:48:55.639398 2448 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 00:48:55.639810 kubelet[2448]: I1009 00:48:55.639791 2448 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 9 00:48:55.640225 kubelet[2448]: I1009 00:48:55.640205 2448 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 9 00:48:55.640396 kubelet[2448]: I1009 00:48:55.640360 2448 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 9 00:48:55.640638 kubelet[2448]: E1009 00:48:55.640561 2448 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fca26a938694d4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 00:48:55.628731604 +0000 UTC m=+1.400440378,LastTimestamp:2024-10-09 00:48:55.628731604 +0000 UTC m=+1.400440378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 9 00:48:55.640739 kubelet[2448]: E1009 00:48:55.640672 2448 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="200ms"
Oct 9 00:48:55.640896 kubelet[2448]: I1009 00:48:55.640873 2448 factory.go:221] Registration of the systemd container factory successfully
Oct 9 00:48:55.641406 kubelet[2448]: I1009 00:48:55.640970 2448 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 00:48:55.641406 kubelet[2448]: W1009 00:48:55.641097 2448 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused
Oct 9 00:48:55.641406 kubelet[2448]: E1009 00:48:55.641139 2448 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused
Oct 9 00:48:55.642651 kubelet[2448]: I1009 00:48:55.642630 2448 factory.go:221] Registration of the containerd container factory successfully
Oct 9 00:48:55.654580 kubelet[2448]: I1009 00:48:55.654543 2448 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 00:48:55.655425 kubelet[2448]: I1009 00:48:55.655403 2448 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 00:48:55.655425 kubelet[2448]: I1009 00:48:55.655423 2448 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 00:48:55.655667 kubelet[2448]: I1009 00:48:55.655438 2448 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 9 00:48:55.655667 kubelet[2448]: E1009 00:48:55.655482 2448 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 00:48:55.658360 kubelet[2448]: W1009 00:48:55.657417 2448 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused
Oct 9 00:48:55.658360 kubelet[2448]: E1009 00:48:55.657455 2448 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused
Oct 9 00:48:55.660973 kubelet[2448]: I1009 00:48:55.660952 2448 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 00:48:55.660973 kubelet[2448]: I1009 00:48:55.660972 2448 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 00:48:55.661099 kubelet[2448]: I1009 00:48:55.660988 2448 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 00:48:55.662929 kubelet[2448]: I1009 00:48:55.662902 2448 policy_none.go:49] "None policy: Start"
Oct 9 00:48:55.663604 kubelet[2448]: I1009 00:48:55.663574 2448 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 00:48:55.663637 kubelet[2448]: I1009 00:48:55.663621 2448 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 00:48:55.668880 kubelet[2448]: I1009 00:48:55.668842 2448 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 00:48:55.669141 kubelet[2448]: I1009 00:48:55.669123 2448 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 00:48:55.670406 kubelet[2448]: E1009 00:48:55.670388 2448 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 9 00:48:55.741935 kubelet[2448]: I1009 00:48:55.741911 2448 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 9 00:48:55.742367 kubelet[2448]: E1009 00:48:55.742349 2448 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost"
Oct 9 00:48:55.756589 kubelet[2448]: I1009 00:48:55.756565 2448 topology_manager.go:215] "Topology Admit Handler" podUID="04c4298a874e765af2f6a110a5eefb7e" podNamespace="kube-system" podName="kube-apiserver-localhost"
Oct 9 00:48:55.757410 kubelet[2448]: I1009 00:48:55.757375 2448 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Oct 9 00:48:55.758456 kubelet[2448]: I1009 00:48:55.758385 2448 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost"
Oct 9 00:48:55.841358 kubelet[2448]: E1009 00:48:55.841250 2448 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="400ms"
Oct 9 00:48:55.941910 kubelet[2448]: I1009 00:48:55.941869 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04c4298a874e765af2f6a110a5eefb7e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"04c4298a874e765af2f6a110a5eefb7e\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 00:48:55.941982 kubelet[2448]: I1009 00:48:55.941917 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04c4298a874e765af2f6a110a5eefb7e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"04c4298a874e765af2f6a110a5eefb7e\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 00:48:55.941982 kubelet[2448]: I1009 00:48:55.941969 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04c4298a874e765af2f6a110a5eefb7e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"04c4298a874e765af2f6a110a5eefb7e\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 00:48:55.942028 kubelet[2448]: I1009 00:48:55.941993 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 00:48:55.942028 kubelet[2448]: I1009 00:48:55.942021 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") "
pod="kube-system/kube-scheduler-localhost" Oct 9 00:48:55.942121 kubelet[2448]: I1009 00:48:55.942058 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:48:55.942121 kubelet[2448]: I1009 00:48:55.942086 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:48:55.942121 kubelet[2448]: I1009 00:48:55.942106 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:48:55.942193 kubelet[2448]: I1009 00:48:55.942128 2448 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:48:55.943922 kubelet[2448]: I1009 00:48:55.943888 2448 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:48:55.944243 kubelet[2448]: E1009 00:48:55.944227 2448 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: 
connect: connection refused" node="localhost" Oct 9 00:48:56.061931 kubelet[2448]: E1009 00:48:56.061870 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:48:56.062253 kubelet[2448]: E1009 00:48:56.062224 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:48:56.062610 kubelet[2448]: E1009 00:48:56.062586 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:48:56.062744 containerd[1604]: time="2024-10-09T00:48:56.062646822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}" Oct 9 00:48:56.063090 containerd[1604]: time="2024-10-09T00:48:56.062897523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:04c4298a874e765af2f6a110a5eefb7e,Namespace:kube-system,Attempt:0,}" Oct 9 00:48:56.063307 containerd[1604]: time="2024-10-09T00:48:56.063162733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}" Oct 9 00:48:56.242581 kubelet[2448]: E1009 00:48:56.242474 2448 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="800ms" Oct 9 00:48:56.345850 kubelet[2448]: I1009 00:48:56.345803 2448 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:48:56.346147 kubelet[2448]: E1009 00:48:56.346130 2448 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Oct 9 00:48:56.525706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1815704330.mount: Deactivated successfully. Oct 9 00:48:56.530768 containerd[1604]: time="2024-10-09T00:48:56.530727870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:48:56.531520 containerd[1604]: time="2024-10-09T00:48:56.531433721Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Oct 9 00:48:56.532343 containerd[1604]: time="2024-10-09T00:48:56.532275122Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:48:56.535596 containerd[1604]: time="2024-10-09T00:48:56.535548543Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:48:56.536435 containerd[1604]: time="2024-10-09T00:48:56.536410145Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:48:56.536861 containerd[1604]: time="2024-10-09T00:48:56.536717078Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 00:48:56.537395 containerd[1604]: time="2024-10-09T00:48:56.537354913Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 00:48:56.539227 containerd[1604]: 
time="2024-10-09T00:48:56.539194148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:48:56.540450 containerd[1604]: time="2024-10-09T00:48:56.540409056Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 477.436182ms" Oct 9 00:48:56.542123 containerd[1604]: time="2024-10-09T00:48:56.542095465Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 478.868124ms" Oct 9 00:48:56.546545 containerd[1604]: time="2024-10-09T00:48:56.546415257Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 483.68058ms" Oct 9 00:48:56.623822 kubelet[2448]: W1009 00:48:56.623723 2448 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Oct 9 00:48:56.623822 kubelet[2448]: E1009 00:48:56.623786 2448 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch 
*v1.Node: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Oct 9 00:48:56.704901 containerd[1604]: time="2024-10-09T00:48:56.704752931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:48:56.705248 containerd[1604]: time="2024-10-09T00:48:56.704875816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:48:56.705248 containerd[1604]: time="2024-10-09T00:48:56.705075455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:48:56.705248 containerd[1604]: time="2024-10-09T00:48:56.705191046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:48:56.706980 containerd[1604]: time="2024-10-09T00:48:56.706851684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:48:56.706980 containerd[1604]: time="2024-10-09T00:48:56.706912887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:48:56.706980 containerd[1604]: time="2024-10-09T00:48:56.706928718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:48:56.707694 containerd[1604]: time="2024-10-09T00:48:56.707600861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:48:56.708075 containerd[1604]: time="2024-10-09T00:48:56.707999137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:48:56.709569 containerd[1604]: time="2024-10-09T00:48:56.709462461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:48:56.709569 containerd[1604]: time="2024-10-09T00:48:56.709501740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:48:56.709659 containerd[1604]: time="2024-10-09T00:48:56.709598734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:48:56.717871 kubelet[2448]: W1009 00:48:56.717819 2448 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Oct 9 00:48:56.717871 kubelet[2448]: E1009 00:48:56.717875 2448 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Oct 9 00:48:56.733329 kubelet[2448]: W1009 00:48:56.733279 2448 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Oct 9 00:48:56.733329 kubelet[2448]: E1009 00:48:56.733334 2448 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused Oct 9 00:48:56.754483 containerd[1604]: time="2024-10-09T00:48:56.754446309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ff8cbbc185954b5ed07bf32de004c851ae5393c7287bfab5e013c3d8b583dab\"" Oct 9 00:48:56.756237 kubelet[2448]: E1009 00:48:56.756213 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:48:56.757114 containerd[1604]: time="2024-10-09T00:48:56.756862778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"47864689c8fa0e01c1c3a51617c71df984efb3bbacf2e93ba7abf7bfc6b2ecdf\"" Oct 9 00:48:56.757510 kubelet[2448]: E1009 00:48:56.757331 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:48:56.758477 containerd[1604]: time="2024-10-09T00:48:56.758285821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:04c4298a874e765af2f6a110a5eefb7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd83b137a5148c5bc06826117c70c9b6ffc239ce7932457492bf7fc65e2f9647\"" Oct 9 00:48:56.760163 kubelet[2448]: E1009 00:48:56.760138 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:48:56.761058 containerd[1604]: time="2024-10-09T00:48:56.761006458Z" level=info msg="CreateContainer within 
sandbox \"47864689c8fa0e01c1c3a51617c71df984efb3bbacf2e93ba7abf7bfc6b2ecdf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 00:48:56.761702 containerd[1604]: time="2024-10-09T00:48:56.761671346Z" level=info msg="CreateContainer within sandbox \"3ff8cbbc185954b5ed07bf32de004c851ae5393c7287bfab5e013c3d8b583dab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 00:48:56.762027 containerd[1604]: time="2024-10-09T00:48:56.761998961Z" level=info msg="CreateContainer within sandbox \"bd83b137a5148c5bc06826117c70c9b6ffc239ce7932457492bf7fc65e2f9647\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 00:48:56.776175 containerd[1604]: time="2024-10-09T00:48:56.776081100Z" level=info msg="CreateContainer within sandbox \"47864689c8fa0e01c1c3a51617c71df984efb3bbacf2e93ba7abf7bfc6b2ecdf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8970c918f07cf52b8461514e37122b84e6dd8f34177746c3888659c2f0782bef\"" Oct 9 00:48:56.777506 containerd[1604]: time="2024-10-09T00:48:56.777394725Z" level=info msg="StartContainer for \"8970c918f07cf52b8461514e37122b84e6dd8f34177746c3888659c2f0782bef\"" Oct 9 00:48:56.779789 containerd[1604]: time="2024-10-09T00:48:56.779753919Z" level=info msg="CreateContainer within sandbox \"3ff8cbbc185954b5ed07bf32de004c851ae5393c7287bfab5e013c3d8b583dab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c7dd2b6e9aa15fb499e6e05f987b30b8a80ba703ffc239735e066b2bbc7f2602\"" Oct 9 00:48:56.780236 containerd[1604]: time="2024-10-09T00:48:56.780211353Z" level=info msg="StartContainer for \"c7dd2b6e9aa15fb499e6e05f987b30b8a80ba703ffc239735e066b2bbc7f2602\"" Oct 9 00:48:56.783591 containerd[1604]: time="2024-10-09T00:48:56.783471868Z" level=info msg="CreateContainer within sandbox \"bd83b137a5148c5bc06826117c70c9b6ffc239ce7932457492bf7fc65e2f9647\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"529ba1e7278dd2e8e9f76343b95e30007bdcd1acc77d3ac66e446bc50e138bbe\"" Oct 9 00:48:56.784131 containerd[1604]: time="2024-10-09T00:48:56.784100004Z" level=info msg="StartContainer for \"529ba1e7278dd2e8e9f76343b95e30007bdcd1acc77d3ac66e446bc50e138bbe\"" Oct 9 00:48:56.856007 containerd[1604]: time="2024-10-09T00:48:56.855956308Z" level=info msg="StartContainer for \"529ba1e7278dd2e8e9f76343b95e30007bdcd1acc77d3ac66e446bc50e138bbe\" returns successfully" Oct 9 00:48:56.856369 containerd[1604]: time="2024-10-09T00:48:56.856346849Z" level=info msg="StartContainer for \"8970c918f07cf52b8461514e37122b84e6dd8f34177746c3888659c2f0782bef\" returns successfully" Oct 9 00:48:56.856594 containerd[1604]: time="2024-10-09T00:48:56.856564644Z" level=info msg="StartContainer for \"c7dd2b6e9aa15fb499e6e05f987b30b8a80ba703ffc239735e066b2bbc7f2602\" returns successfully" Oct 9 00:48:57.044102 kubelet[2448]: E1009 00:48:57.043969 2448 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="1.6s" Oct 9 00:48:57.147923 kubelet[2448]: I1009 00:48:57.147880 2448 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:48:57.664784 kubelet[2448]: E1009 00:48:57.664721 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:48:57.666928 kubelet[2448]: E1009 00:48:57.666879 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:48:57.668701 kubelet[2448]: E1009 00:48:57.668674 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:48:58.671981 kubelet[2448]: E1009 00:48:58.671952 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:48:58.956149 kubelet[2448]: E1009 00:48:58.955988 2448 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 9 00:48:59.011050 kubelet[2448]: I1009 00:48:59.010841 2448 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 00:48:59.019569 kubelet[2448]: E1009 00:48:59.019538 2448 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:48:59.119925 kubelet[2448]: E1009 00:48:59.119862 2448 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:48:59.622011 kubelet[2448]: I1009 00:48:59.621948 2448 apiserver.go:52] "Watching apiserver" Oct 9 00:48:59.641352 kubelet[2448]: I1009 00:48:59.641307 2448 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 00:49:00.114925 kubelet[2448]: E1009 00:49:00.114805 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:00.672534 kubelet[2448]: E1009 00:49:00.672503 2448 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:01.358156 systemd[1]: Reloading requested from client PID 2727 ('systemctl') (unit session-7.scope)... Oct 9 00:49:01.358174 systemd[1]: Reloading... Oct 9 00:49:01.404089 zram_generator::config[2769]: No configuration found. 
Oct 9 00:49:01.517781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:49:01.572452 systemd[1]: Reloading finished in 213 ms. Oct 9 00:49:01.595380 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:49:01.610854 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 00:49:01.611219 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:49:01.620419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:49:01.705545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:49:01.709061 (kubelet)[2818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 00:49:01.756913 kubelet[2818]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 00:49:01.756913 kubelet[2818]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 00:49:01.756913 kubelet[2818]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 00:49:01.757288 kubelet[2818]: I1009 00:49:01.756972 2818 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 00:49:01.761561 kubelet[2818]: I1009 00:49:01.761523 2818 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 00:49:01.761626 kubelet[2818]: I1009 00:49:01.761554 2818 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 00:49:01.761812 kubelet[2818]: I1009 00:49:01.761785 2818 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 00:49:01.764333 kubelet[2818]: I1009 00:49:01.764297 2818 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 00:49:01.766474 kubelet[2818]: I1009 00:49:01.766437 2818 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 00:49:01.772006 kubelet[2818]: I1009 00:49:01.771982 2818 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 00:49:01.777881 kubelet[2818]: I1009 00:49:01.772470 2818 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 00:49:01.777881 kubelet[2818]: I1009 00:49:01.772621 2818 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 00:49:01.777881 kubelet[2818]: I1009 00:49:01.772640 2818 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 00:49:01.777881 kubelet[2818]: I1009 00:49:01.772649 2818 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 00:49:01.777881 kubelet[2818]: I1009 
00:49:01.772681 2818 state_mem.go:36] "Initialized new in-memory state store" Oct 9 00:49:01.777881 kubelet[2818]: I1009 00:49:01.772772 2818 kubelet.go:396] "Attempting to sync node with API server" Oct 9 00:49:01.778097 kubelet[2818]: I1009 00:49:01.772785 2818 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 00:49:01.778097 kubelet[2818]: I1009 00:49:01.772803 2818 kubelet.go:312] "Adding apiserver pod source" Oct 9 00:49:01.778097 kubelet[2818]: I1009 00:49:01.772816 2818 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 00:49:01.778097 kubelet[2818]: I1009 00:49:01.776914 2818 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 00:49:01.778097 kubelet[2818]: I1009 00:49:01.777099 2818 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 00:49:01.778097 kubelet[2818]: I1009 00:49:01.777461 2818 server.go:1256] "Started kubelet" Oct 9 00:49:01.778097 kubelet[2818]: I1009 00:49:01.777524 2818 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 00:49:01.778097 kubelet[2818]: I1009 00:49:01.777656 2818 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 00:49:01.778097 kubelet[2818]: I1009 00:49:01.777831 2818 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 00:49:01.778286 kubelet[2818]: I1009 00:49:01.778262 2818 server.go:461] "Adding debug handlers to kubelet server" Oct 9 00:49:01.786054 kubelet[2818]: I1009 00:49:01.785476 2818 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 00:49:01.789851 kubelet[2818]: I1009 00:49:01.789830 2818 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 00:49:01.790013 kubelet[2818]: I1009 00:49:01.789999 2818 desired_state_of_world_populator.go:151] "Desired state populator 
starts to run" Oct 9 00:49:01.790376 kubelet[2818]: I1009 00:49:01.790354 2818 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 00:49:01.798471 kubelet[2818]: E1009 00:49:01.798447 2818 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 00:49:01.798879 kubelet[2818]: I1009 00:49:01.798846 2818 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 00:49:01.800633 kubelet[2818]: I1009 00:49:01.800613 2818 factory.go:221] Registration of the containerd container factory successfully Oct 9 00:49:01.800633 kubelet[2818]: I1009 00:49:01.800630 2818 factory.go:221] Registration of the systemd container factory successfully Oct 9 00:49:01.813930 kubelet[2818]: I1009 00:49:01.813865 2818 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 00:49:01.815454 kubelet[2818]: I1009 00:49:01.815414 2818 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 00:49:01.815454 kubelet[2818]: I1009 00:49:01.815444 2818 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 00:49:01.815454 kubelet[2818]: I1009 00:49:01.815459 2818 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 00:49:01.815563 kubelet[2818]: E1009 00:49:01.815501 2818 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 00:49:01.851163 kubelet[2818]: I1009 00:49:01.851136 2818 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 00:49:01.851163 kubelet[2818]: I1009 00:49:01.851159 2818 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 00:49:01.851312 kubelet[2818]: I1009 00:49:01.851177 2818 state_mem.go:36] "Initialized new in-memory state store" Oct 9 00:49:01.851357 kubelet[2818]: I1009 00:49:01.851343 2818 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 00:49:01.851382 kubelet[2818]: I1009 00:49:01.851365 2818 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 00:49:01.851382 kubelet[2818]: I1009 00:49:01.851373 2818 policy_none.go:49] "None policy: Start" Oct 9 00:49:01.852621 kubelet[2818]: I1009 00:49:01.852591 2818 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 00:49:01.853275 kubelet[2818]: I1009 00:49:01.852712 2818 state_mem.go:35] "Initializing new in-memory state store" Oct 9 00:49:01.853275 kubelet[2818]: I1009 00:49:01.852884 2818 state_mem.go:75] "Updated machine memory state" Oct 9 00:49:01.853940 kubelet[2818]: I1009 00:49:01.853914 2818 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 00:49:01.854214 kubelet[2818]: I1009 00:49:01.854151 2818 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 00:49:01.894079 kubelet[2818]: I1009 00:49:01.893603 2818 kubelet_node_status.go:73] "Attempting to register node" 
node="localhost" Oct 9 00:49:01.916155 kubelet[2818]: I1009 00:49:01.916121 2818 topology_manager.go:215] "Topology Admit Handler" podUID="04c4298a874e765af2f6a110a5eefb7e" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 00:49:01.916257 kubelet[2818]: I1009 00:49:01.916236 2818 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 00:49:01.916360 kubelet[2818]: I1009 00:49:01.916336 2818 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 00:49:01.928771 kubelet[2818]: E1009 00:49:01.928306 2818 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 9 00:49:01.928998 kubelet[2818]: I1009 00:49:01.928972 2818 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 9 00:49:01.929085 kubelet[2818]: I1009 00:49:01.929071 2818 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 00:49:02.091699 kubelet[2818]: I1009 00:49:02.091324 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04c4298a874e765af2f6a110a5eefb7e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"04c4298a874e765af2f6a110a5eefb7e\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:49:02.091699 kubelet[2818]: I1009 00:49:02.091373 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:49:02.091699 kubelet[2818]: I1009 
00:49:02.091397 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:49:02.091699 kubelet[2818]: I1009 00:49:02.091418 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:49:02.091699 kubelet[2818]: I1009 00:49:02.091440 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 9 00:49:02.091921 kubelet[2818]: I1009 00:49:02.091458 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04c4298a874e765af2f6a110a5eefb7e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"04c4298a874e765af2f6a110a5eefb7e\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:49:02.091921 kubelet[2818]: I1009 00:49:02.091477 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04c4298a874e765af2f6a110a5eefb7e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"04c4298a874e765af2f6a110a5eefb7e\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:49:02.091921 kubelet[2818]: 
I1009 00:49:02.091497 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:49:02.091921 kubelet[2818]: I1009 00:49:02.091518 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:49:02.230180 kubelet[2818]: E1009 00:49:02.229847 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:02.230180 kubelet[2818]: E1009 00:49:02.230006 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:02.230180 kubelet[2818]: E1009 00:49:02.230062 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:02.774440 kubelet[2818]: I1009 00:49:02.774391 2818 apiserver.go:52] "Watching apiserver" Oct 9 00:49:02.823782 kubelet[2818]: E1009 00:49:02.823737 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:02.833016 kubelet[2818]: E1009 00:49:02.832972 2818 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Oct 9 00:49:02.833519 kubelet[2818]: E1009 00:49:02.833494 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:02.839291 kubelet[2818]: E1009 00:49:02.839260 2818 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 9 00:49:02.842178 kubelet[2818]: E1009 00:49:02.842130 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:02.889952 kubelet[2818]: I1009 00:49:02.889919 2818 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.889863677 podStartE2EDuration="2.889863677s" podCreationTimestamp="2024-10-09 00:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:49:02.883276643 +0000 UTC m=+1.170901038" watchObservedRunningTime="2024-10-09 00:49:02.889863677 +0000 UTC m=+1.177488072" Oct 9 00:49:02.890581 kubelet[2818]: I1009 00:49:02.890299 2818 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 00:49:02.901081 kubelet[2818]: I1009 00:49:02.898558 2818 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.89852237 podStartE2EDuration="1.89852237s" podCreationTimestamp="2024-10-09 00:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:49:02.890464896 +0000 UTC m=+1.178089291" watchObservedRunningTime="2024-10-09 00:49:02.89852237 +0000 UTC m=+1.186146725" 
Oct 9 00:49:02.907190 kubelet[2818]: I1009 00:49:02.907133 2818 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.907095627 podStartE2EDuration="1.907095627s" podCreationTimestamp="2024-10-09 00:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:49:02.898660534 +0000 UTC m=+1.186284929" watchObservedRunningTime="2024-10-09 00:49:02.907095627 +0000 UTC m=+1.194720022" Oct 9 00:49:03.825571 kubelet[2818]: E1009 00:49:03.825257 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:03.825571 kubelet[2818]: E1009 00:49:03.825517 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:04.834691 kubelet[2818]: E1009 00:49:04.834659 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:05.874949 sudo[1812]: pam_unix(sudo:session): session closed for user root Oct 9 00:49:05.878946 sshd[1805]: pam_unix(sshd:session): session closed for user core Oct 9 00:49:05.882477 systemd[1]: sshd@6-10.0.0.71:22-10.0.0.1:49318.service: Deactivated successfully. Oct 9 00:49:05.884333 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit. Oct 9 00:49:05.884421 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 00:49:05.885318 systemd-logind[1577]: Removed session 7. 
Oct 9 00:49:12.003024 kubelet[2818]: E1009 00:49:12.002993 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:12.674263 kubelet[2818]: E1009 00:49:12.674234 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:12.836069 kubelet[2818]: E1009 00:49:12.836025 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:14.843979 kubelet[2818]: E1009 00:49:14.842763 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:16.133176 update_engine[1589]: I20241009 00:49:16.133102 1589 update_attempter.cc:509] Updating boot flags... Oct 9 00:49:16.157840 kubelet[2818]: I1009 00:49:16.157808 2818 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 00:49:16.161343 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2914) Oct 9 00:49:16.164905 containerd[1604]: time="2024-10-09T00:49:16.164862643Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 9 00:49:16.166341 kubelet[2818]: I1009 00:49:16.165116 2818 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 00:49:16.191537 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2913) Oct 9 00:49:16.949930 kubelet[2818]: I1009 00:49:16.949853 2818 topology_manager.go:215] "Topology Admit Handler" podUID="1eace17f-9332-442e-8a01-414bc10309f5" podNamespace="kube-system" podName="kube-proxy-9rm74" Oct 9 00:49:16.995435 kubelet[2818]: I1009 00:49:16.995408 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1eace17f-9332-442e-8a01-414bc10309f5-kube-proxy\") pod \"kube-proxy-9rm74\" (UID: \"1eace17f-9332-442e-8a01-414bc10309f5\") " pod="kube-system/kube-proxy-9rm74" Oct 9 00:49:16.995435 kubelet[2818]: I1009 00:49:16.995445 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1eace17f-9332-442e-8a01-414bc10309f5-xtables-lock\") pod \"kube-proxy-9rm74\" (UID: \"1eace17f-9332-442e-8a01-414bc10309f5\") " pod="kube-system/kube-proxy-9rm74" Oct 9 00:49:16.995574 kubelet[2818]: I1009 00:49:16.995466 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1eace17f-9332-442e-8a01-414bc10309f5-lib-modules\") pod \"kube-proxy-9rm74\" (UID: \"1eace17f-9332-442e-8a01-414bc10309f5\") " pod="kube-system/kube-proxy-9rm74" Oct 9 00:49:16.995574 kubelet[2818]: I1009 00:49:16.995490 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whslb\" (UniqueName: \"kubernetes.io/projected/1eace17f-9332-442e-8a01-414bc10309f5-kube-api-access-whslb\") pod \"kube-proxy-9rm74\" (UID: \"1eace17f-9332-442e-8a01-414bc10309f5\") " 
pod="kube-system/kube-proxy-9rm74" Oct 9 00:49:17.214497 kubelet[2818]: I1009 00:49:17.214402 2818 topology_manager.go:215] "Topology Admit Handler" podUID="19a42114-d002-4e46-b41e-40b88ccc90de" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-7rv42" Oct 9 00:49:17.254710 kubelet[2818]: E1009 00:49:17.254293 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:17.255471 containerd[1604]: time="2024-10-09T00:49:17.254950567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rm74,Uid:1eace17f-9332-442e-8a01-414bc10309f5,Namespace:kube-system,Attempt:0,}" Oct 9 00:49:17.283351 containerd[1604]: time="2024-10-09T00:49:17.283225075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:49:17.283351 containerd[1604]: time="2024-10-09T00:49:17.283290111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:49:17.283351 containerd[1604]: time="2024-10-09T00:49:17.283304399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:49:17.283615 containerd[1604]: time="2024-10-09T00:49:17.283565224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:49:17.297297 kubelet[2818]: I1009 00:49:17.297173 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5868c\" (UniqueName: \"kubernetes.io/projected/19a42114-d002-4e46-b41e-40b88ccc90de-kube-api-access-5868c\") pod \"tigera-operator-5d56685c77-7rv42\" (UID: \"19a42114-d002-4e46-b41e-40b88ccc90de\") " pod="tigera-operator/tigera-operator-5d56685c77-7rv42" Oct 9 00:49:17.297297 kubelet[2818]: I1009 00:49:17.297246 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/19a42114-d002-4e46-b41e-40b88ccc90de-var-lib-calico\") pod \"tigera-operator-5d56685c77-7rv42\" (UID: \"19a42114-d002-4e46-b41e-40b88ccc90de\") " pod="tigera-operator/tigera-operator-5d56685c77-7rv42" Oct 9 00:49:17.313990 containerd[1604]: time="2024-10-09T00:49:17.313947823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rm74,Uid:1eace17f-9332-442e-8a01-414bc10309f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"12786a43b80aec10e9136b0f0c2d6f4ba03cded9e4a708b2f13fe295c5ca5abd\"" Oct 9 00:49:17.314603 kubelet[2818]: E1009 00:49:17.314583 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:17.316427 containerd[1604]: time="2024-10-09T00:49:17.316398385Z" level=info msg="CreateContainer within sandbox \"12786a43b80aec10e9136b0f0c2d6f4ba03cded9e4a708b2f13fe295c5ca5abd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 00:49:17.342077 containerd[1604]: time="2024-10-09T00:49:17.341866013Z" level=info msg="CreateContainer within sandbox \"12786a43b80aec10e9136b0f0c2d6f4ba03cded9e4a708b2f13fe295c5ca5abd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"a157614ec407308a0eaa607f7f6aa0a99cb685f79d82be822d51bd61b9eb9ded\"" Oct 9 00:49:17.343073 containerd[1604]: time="2024-10-09T00:49:17.342497844Z" level=info msg="StartContainer for \"a157614ec407308a0eaa607f7f6aa0a99cb685f79d82be822d51bd61b9eb9ded\"" Oct 9 00:49:17.391469 containerd[1604]: time="2024-10-09T00:49:17.391323689Z" level=info msg="StartContainer for \"a157614ec407308a0eaa607f7f6aa0a99cb685f79d82be822d51bd61b9eb9ded\" returns successfully" Oct 9 00:49:17.519776 containerd[1604]: time="2024-10-09T00:49:17.519328802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-7rv42,Uid:19a42114-d002-4e46-b41e-40b88ccc90de,Namespace:tigera-operator,Attempt:0,}" Oct 9 00:49:17.539878 containerd[1604]: time="2024-10-09T00:49:17.539551037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:49:17.539878 containerd[1604]: time="2024-10-09T00:49:17.539604106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:49:17.539878 containerd[1604]: time="2024-10-09T00:49:17.539619795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:49:17.539878 containerd[1604]: time="2024-10-09T00:49:17.539698599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:49:17.580983 containerd[1604]: time="2024-10-09T00:49:17.580949035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-7rv42,Uid:19a42114-d002-4e46-b41e-40b88ccc90de,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2fac90fbdf76fe6db8a93d335de5aebebeb92f69a55f28e18d823358ce2c3d93\"" Oct 9 00:49:17.583536 containerd[1604]: time="2024-10-09T00:49:17.583453867Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 00:49:17.845418 kubelet[2818]: E1009 00:49:17.845390 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:17.853802 kubelet[2818]: I1009 00:49:17.853753 2818 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9rm74" podStartSLOduration=1.853718652 podStartE2EDuration="1.853718652s" podCreationTimestamp="2024-10-09 00:49:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:49:17.853591981 +0000 UTC m=+16.141216376" watchObservedRunningTime="2024-10-09 00:49:17.853718652 +0000 UTC m=+16.141343047" Oct 9 00:49:18.118068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1978787262.mount: Deactivated successfully. Oct 9 00:49:18.480336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1664924426.mount: Deactivated successfully. 
Oct 9 00:49:18.959122 containerd[1604]: time="2024-10-09T00:49:18.959028557Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:18.959526 containerd[1604]: time="2024-10-09T00:49:18.959473833Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485907" Oct 9 00:49:18.960468 containerd[1604]: time="2024-10-09T00:49:18.960432420Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:18.962617 containerd[1604]: time="2024-10-09T00:49:18.962568989Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:18.963549 containerd[1604]: time="2024-10-09T00:49:18.963522293Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 1.380025523s" Oct 9 00:49:18.963595 containerd[1604]: time="2024-10-09T00:49:18.963554310Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Oct 9 00:49:18.971676 containerd[1604]: time="2024-10-09T00:49:18.971633581Z" level=info msg="CreateContainer within sandbox \"2fac90fbdf76fe6db8a93d335de5aebebeb92f69a55f28e18d823358ce2c3d93\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 00:49:18.980726 containerd[1604]: time="2024-10-09T00:49:18.980625894Z" level=info msg="CreateContainer within sandbox 
\"2fac90fbdf76fe6db8a93d335de5aebebeb92f69a55f28e18d823358ce2c3d93\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"05f50b18a229a08bfb6c01e72e0dd0644bbfb270fb9350821fc192a3b23ed37d\"" Oct 9 00:49:18.982300 containerd[1604]: time="2024-10-09T00:49:18.982271004Z" level=info msg="StartContainer for \"05f50b18a229a08bfb6c01e72e0dd0644bbfb270fb9350821fc192a3b23ed37d\"" Oct 9 00:49:19.026762 containerd[1604]: time="2024-10-09T00:49:19.025909205Z" level=info msg="StartContainer for \"05f50b18a229a08bfb6c01e72e0dd0644bbfb270fb9350821fc192a3b23ed37d\" returns successfully" Oct 9 00:49:21.845618 kubelet[2818]: I1009 00:49:21.845566 2818 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-7rv42" podStartSLOduration=3.46260603 podStartE2EDuration="4.845522072s" podCreationTimestamp="2024-10-09 00:49:17 +0000 UTC" firstStartedPulling="2024-10-09 00:49:17.582327961 +0000 UTC m=+15.869952356" lastFinishedPulling="2024-10-09 00:49:18.965244043 +0000 UTC m=+17.252868398" observedRunningTime="2024-10-09 00:49:19.881646258 +0000 UTC m=+18.169270693" watchObservedRunningTime="2024-10-09 00:49:21.845522072 +0000 UTC m=+20.133146508" Oct 9 00:49:23.387962 kubelet[2818]: I1009 00:49:23.387736 2818 topology_manager.go:215] "Topology Admit Handler" podUID="a221e4d1-1d3e-449b-b5bf-140c2b9eff3d" podNamespace="calico-system" podName="calico-typha-58f6cbfcf4-9bvrg" Oct 9 00:49:23.436528 kubelet[2818]: I1009 00:49:23.436470 2818 topology_manager.go:215] "Topology Admit Handler" podUID="83df4c75-d162-435a-8a53-e773a28c0ff2" podNamespace="calico-system" podName="calico-node-752sq" Oct 9 00:49:23.442876 kubelet[2818]: I1009 00:49:23.442838 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83df4c75-d162-435a-8a53-e773a28c0ff2-lib-modules\") pod \"calico-node-752sq\" (UID: 
\"83df4c75-d162-435a-8a53-e773a28c0ff2\") " pod="calico-system/calico-node-752sq" Oct 9 00:49:23.442876 kubelet[2818]: I1009 00:49:23.442876 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83df4c75-d162-435a-8a53-e773a28c0ff2-tigera-ca-bundle\") pod \"calico-node-752sq\" (UID: \"83df4c75-d162-435a-8a53-e773a28c0ff2\") " pod="calico-system/calico-node-752sq" Oct 9 00:49:23.443039 kubelet[2818]: I1009 00:49:23.442898 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/83df4c75-d162-435a-8a53-e773a28c0ff2-var-lib-calico\") pod \"calico-node-752sq\" (UID: \"83df4c75-d162-435a-8a53-e773a28c0ff2\") " pod="calico-system/calico-node-752sq" Oct 9 00:49:23.443039 kubelet[2818]: I1009 00:49:23.442919 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83df4c75-d162-435a-8a53-e773a28c0ff2-xtables-lock\") pod \"calico-node-752sq\" (UID: \"83df4c75-d162-435a-8a53-e773a28c0ff2\") " pod="calico-system/calico-node-752sq" Oct 9 00:49:23.443039 kubelet[2818]: I1009 00:49:23.442949 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/83df4c75-d162-435a-8a53-e773a28c0ff2-cni-net-dir\") pod \"calico-node-752sq\" (UID: \"83df4c75-d162-435a-8a53-e773a28c0ff2\") " pod="calico-system/calico-node-752sq" Oct 9 00:49:23.443039 kubelet[2818]: I1009 00:49:23.442969 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/83df4c75-d162-435a-8a53-e773a28c0ff2-flexvol-driver-host\") pod \"calico-node-752sq\" (UID: \"83df4c75-d162-435a-8a53-e773a28c0ff2\") " 
pod="calico-system/calico-node-752sq" Oct 9 00:49:23.443039 kubelet[2818]: I1009 00:49:23.442990 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/83df4c75-d162-435a-8a53-e773a28c0ff2-var-run-calico\") pod \"calico-node-752sq\" (UID: \"83df4c75-d162-435a-8a53-e773a28c0ff2\") " pod="calico-system/calico-node-752sq" Oct 9 00:49:23.443170 kubelet[2818]: I1009 00:49:23.443011 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vkvk\" (UniqueName: \"kubernetes.io/projected/a221e4d1-1d3e-449b-b5bf-140c2b9eff3d-kube-api-access-6vkvk\") pod \"calico-typha-58f6cbfcf4-9bvrg\" (UID: \"a221e4d1-1d3e-449b-b5bf-140c2b9eff3d\") " pod="calico-system/calico-typha-58f6cbfcf4-9bvrg" Oct 9 00:49:23.443170 kubelet[2818]: I1009 00:49:23.443030 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/83df4c75-d162-435a-8a53-e773a28c0ff2-node-certs\") pod \"calico-node-752sq\" (UID: \"83df4c75-d162-435a-8a53-e773a28c0ff2\") " pod="calico-system/calico-node-752sq" Oct 9 00:49:23.448292 kubelet[2818]: I1009 00:49:23.448115 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/83df4c75-d162-435a-8a53-e773a28c0ff2-cni-bin-dir\") pod \"calico-node-752sq\" (UID: \"83df4c75-d162-435a-8a53-e773a28c0ff2\") " pod="calico-system/calico-node-752sq" Oct 9 00:49:23.448292 kubelet[2818]: I1009 00:49:23.448159 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/83df4c75-d162-435a-8a53-e773a28c0ff2-cni-log-dir\") pod \"calico-node-752sq\" (UID: \"83df4c75-d162-435a-8a53-e773a28c0ff2\") " pod="calico-system/calico-node-752sq" Oct 9 
00:49:23.448292 kubelet[2818]: I1009 00:49:23.448190 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a221e4d1-1d3e-449b-b5bf-140c2b9eff3d-tigera-ca-bundle\") pod \"calico-typha-58f6cbfcf4-9bvrg\" (UID: \"a221e4d1-1d3e-449b-b5bf-140c2b9eff3d\") " pod="calico-system/calico-typha-58f6cbfcf4-9bvrg" Oct 9 00:49:23.448292 kubelet[2818]: I1009 00:49:23.448235 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a221e4d1-1d3e-449b-b5bf-140c2b9eff3d-typha-certs\") pod \"calico-typha-58f6cbfcf4-9bvrg\" (UID: \"a221e4d1-1d3e-449b-b5bf-140c2b9eff3d\") " pod="calico-system/calico-typha-58f6cbfcf4-9bvrg" Oct 9 00:49:23.448292 kubelet[2818]: I1009 00:49:23.448269 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/83df4c75-d162-435a-8a53-e773a28c0ff2-policysync\") pod \"calico-node-752sq\" (UID: \"83df4c75-d162-435a-8a53-e773a28c0ff2\") " pod="calico-system/calico-node-752sq" Oct 9 00:49:23.448558 kubelet[2818]: I1009 00:49:23.448511 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hvfl\" (UniqueName: \"kubernetes.io/projected/83df4c75-d162-435a-8a53-e773a28c0ff2-kube-api-access-5hvfl\") pod \"calico-node-752sq\" (UID: \"83df4c75-d162-435a-8a53-e773a28c0ff2\") " pod="calico-system/calico-node-752sq" Oct 9 00:49:23.550631 kubelet[2818]: I1009 00:49:23.549141 2818 topology_manager.go:215] "Topology Admit Handler" podUID="a439271e-ca34-412b-84ad-b23f24ed45b0" podNamespace="calico-system" podName="csi-node-driver-pjbzb" Oct 9 00:49:23.550868 kubelet[2818]: E1009 00:49:23.550816 2818 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjbzb" podUID="a439271e-ca34-412b-84ad-b23f24ed45b0" Oct 9 00:49:23.561534 kubelet[2818]: E1009 00:49:23.561088 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.561534 kubelet[2818]: W1009 00:49:23.561116 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.561534 kubelet[2818]: E1009 00:49:23.561152 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.561534 kubelet[2818]: E1009 00:49:23.561489 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.561534 kubelet[2818]: W1009 00:49:23.561502 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.561534 kubelet[2818]: E1009 00:49:23.561518 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.563586 kubelet[2818]: E1009 00:49:23.563160 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.563586 kubelet[2818]: W1009 00:49:23.563174 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.563586 kubelet[2818]: E1009 00:49:23.563191 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.565603 kubelet[2818]: E1009 00:49:23.565265 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.565603 kubelet[2818]: W1009 00:49:23.565283 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.575068 kubelet[2818]: E1009 00:49:23.567708 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.583873 kubelet[2818]: E1009 00:49:23.583844 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.584691 kubelet[2818]: W1009 00:49:23.584148 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.585225 kubelet[2818]: E1009 00:49:23.585086 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.585586 kubelet[2818]: E1009 00:49:23.585573 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.586366 kubelet[2818]: W1009 00:49:23.586310 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.586366 kubelet[2818]: E1009 00:49:23.586354 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.586717 kubelet[2818]: E1009 00:49:23.586644 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.586717 kubelet[2818]: W1009 00:49:23.586657 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.586717 kubelet[2818]: E1009 00:49:23.586692 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.586998 kubelet[2818]: E1009 00:49:23.586977 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.587130 kubelet[2818]: W1009 00:49:23.587075 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.587390 kubelet[2818]: E1009 00:49:23.587374 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.588327 kubelet[2818]: E1009 00:49:23.588260 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.588327 kubelet[2818]: W1009 00:49:23.588277 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.588890 kubelet[2818]: E1009 00:49:23.588847 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.589182 kubelet[2818]: E1009 00:49:23.589137 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.589182 kubelet[2818]: W1009 00:49:23.589151 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.589344 kubelet[2818]: E1009 00:49:23.589254 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.589512 kubelet[2818]: E1009 00:49:23.589494 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.589512 kubelet[2818]: W1009 00:49:23.589505 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.591244 kubelet[2818]: E1009 00:49:23.589730 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.591244 kubelet[2818]: E1009 00:49:23.590920 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.591244 kubelet[2818]: W1009 00:49:23.590937 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.591244 kubelet[2818]: E1009 00:49:23.590974 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.592366 kubelet[2818]: E1009 00:49:23.592345 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.592366 kubelet[2818]: W1009 00:49:23.592367 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.592470 kubelet[2818]: E1009 00:49:23.592388 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.593063 kubelet[2818]: E1009 00:49:23.592592 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.593063 kubelet[2818]: W1009 00:49:23.592604 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.593063 kubelet[2818]: E1009 00:49:23.592674 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.593063 kubelet[2818]: E1009 00:49:23.592765 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.593063 kubelet[2818]: W1009 00:49:23.592772 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.593063 kubelet[2818]: E1009 00:49:23.592829 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.593063 kubelet[2818]: E1009 00:49:23.592953 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.593063 kubelet[2818]: W1009 00:49:23.592960 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.593063 kubelet[2818]: E1009 00:49:23.593021 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.593279 kubelet[2818]: E1009 00:49:23.593162 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.593279 kubelet[2818]: W1009 00:49:23.593170 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.593279 kubelet[2818]: E1009 00:49:23.593220 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.594837 kubelet[2818]: E1009 00:49:23.593395 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.594837 kubelet[2818]: W1009 00:49:23.593402 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.594837 kubelet[2818]: E1009 00:49:23.593475 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.594837 kubelet[2818]: E1009 00:49:23.593583 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.594837 kubelet[2818]: W1009 00:49:23.593590 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.594837 kubelet[2818]: E1009 00:49:23.593663 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.594837 kubelet[2818]: E1009 00:49:23.593714 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.594837 kubelet[2818]: W1009 00:49:23.593719 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.594837 kubelet[2818]: E1009 00:49:23.593744 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.594837 kubelet[2818]: E1009 00:49:23.593918 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.595077 kubelet[2818]: W1009 00:49:23.593926 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.595077 kubelet[2818]: E1009 00:49:23.593937 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.597696 kubelet[2818]: E1009 00:49:23.596106 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.597696 kubelet[2818]: W1009 00:49:23.596122 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.597696 kubelet[2818]: E1009 00:49:23.596233 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.600459 kubelet[2818]: E1009 00:49:23.600432 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.600529 kubelet[2818]: W1009 00:49:23.600453 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.600529 kubelet[2818]: E1009 00:49:23.600485 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.600970 kubelet[2818]: E1009 00:49:23.600944 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.600970 kubelet[2818]: W1009 00:49:23.600966 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.601091 kubelet[2818]: E1009 00:49:23.601071 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.601770 kubelet[2818]: E1009 00:49:23.601663 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.601770 kubelet[2818]: W1009 00:49:23.601770 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.603069 kubelet[2818]: E1009 00:49:23.601951 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.603069 kubelet[2818]: E1009 00:49:23.602471 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.603069 kubelet[2818]: W1009 00:49:23.602482 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.603069 kubelet[2818]: E1009 00:49:23.602861 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.605519 kubelet[2818]: E1009 00:49:23.605346 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.605519 kubelet[2818]: W1009 00:49:23.605373 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.605519 kubelet[2818]: E1009 00:49:23.605486 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.606083 kubelet[2818]: E1009 00:49:23.605896 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.606083 kubelet[2818]: W1009 00:49:23.605914 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.606083 kubelet[2818]: E1009 00:49:23.606014 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.606422 kubelet[2818]: E1009 00:49:23.606330 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.606422 kubelet[2818]: W1009 00:49:23.606342 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.606547 kubelet[2818]: E1009 00:49:23.606534 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.606694 kubelet[2818]: E1009 00:49:23.606684 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.606808 kubelet[2818]: W1009 00:49:23.606736 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.606978 kubelet[2818]: E1009 00:49:23.606961 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.608475 kubelet[2818]: E1009 00:49:23.608414 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.608475 kubelet[2818]: W1009 00:49:23.608427 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.608475 kubelet[2818]: E1009 00:49:23.608450 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.613855 kubelet[2818]: E1009 00:49:23.613839 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.614010 kubelet[2818]: W1009 00:49:23.613889 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.614010 kubelet[2818]: E1009 00:49:23.613905 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.645251 kubelet[2818]: E1009 00:49:23.645137 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.645251 kubelet[2818]: W1009 00:49:23.645156 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.645251 kubelet[2818]: E1009 00:49:23.645175 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.646278 kubelet[2818]: E1009 00:49:23.646264 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.646278 kubelet[2818]: W1009 00:49:23.646278 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.646364 kubelet[2818]: E1009 00:49:23.646290 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.646465 kubelet[2818]: E1009 00:49:23.646455 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.646465 kubelet[2818]: W1009 00:49:23.646465 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.646542 kubelet[2818]: E1009 00:49:23.646477 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.646717 kubelet[2818]: E1009 00:49:23.646703 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.646749 kubelet[2818]: W1009 00:49:23.646717 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.646749 kubelet[2818]: E1009 00:49:23.646732 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.647403 kubelet[2818]: E1009 00:49:23.647389 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.647403 kubelet[2818]: W1009 00:49:23.647403 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.647494 kubelet[2818]: E1009 00:49:23.647418 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.648307 kubelet[2818]: E1009 00:49:23.648294 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.648504 kubelet[2818]: W1009 00:49:23.648308 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.648504 kubelet[2818]: E1009 00:49:23.648497 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.650157 kubelet[2818]: E1009 00:49:23.650141 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.650157 kubelet[2818]: W1009 00:49:23.650155 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.650244 kubelet[2818]: E1009 00:49:23.650168 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.650383 kubelet[2818]: E1009 00:49:23.650371 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.650383 kubelet[2818]: W1009 00:49:23.650382 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.650437 kubelet[2818]: E1009 00:49:23.650392 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.650584 kubelet[2818]: E1009 00:49:23.650573 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.650584 kubelet[2818]: W1009 00:49:23.650582 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.650652 kubelet[2818]: E1009 00:49:23.650593 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.650753 kubelet[2818]: E1009 00:49:23.650742 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.650753 kubelet[2818]: W1009 00:49:23.650751 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.650817 kubelet[2818]: E1009 00:49:23.650761 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.650909 kubelet[2818]: E1009 00:49:23.650899 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.650909 kubelet[2818]: W1009 00:49:23.650908 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.650977 kubelet[2818]: E1009 00:49:23.650917 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.651086 kubelet[2818]: E1009 00:49:23.651076 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.651086 kubelet[2818]: W1009 00:49:23.651084 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.651145 kubelet[2818]: E1009 00:49:23.651093 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.651259 kubelet[2818]: E1009 00:49:23.651247 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.651259 kubelet[2818]: W1009 00:49:23.651257 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.651320 kubelet[2818]: E1009 00:49:23.651266 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.651433 kubelet[2818]: E1009 00:49:23.651422 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.651433 kubelet[2818]: W1009 00:49:23.651431 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.651492 kubelet[2818]: E1009 00:49:23.651439 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 00:49:23.651581 kubelet[2818]: E1009 00:49:23.651572 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.651581 kubelet[2818]: W1009 00:49:23.651580 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.651651 kubelet[2818]: E1009 00:49:23.651589 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 00:49:23.651716 kubelet[2818]: E1009 00:49:23.651704 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 00:49:23.651742 kubelet[2818]: W1009 00:49:23.651736 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 00:49:23.651742 kubelet[2818]: E1009 00:49:23.651748 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 9 00:49:23.651908 kubelet[2818]: E1009 00:49:23.651895 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.651908 kubelet[2818]: W1009 00:49:23.651905 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.651966 kubelet[2818]: E1009 00:49:23.651915 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.652058 kubelet[2818]: E1009 00:49:23.652049 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.652058 kubelet[2818]: W1009 00:49:23.652058 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.652112 kubelet[2818]: E1009 00:49:23.652067 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.652192 kubelet[2818]: E1009 00:49:23.652183 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.652192 kubelet[2818]: W1009 00:49:23.652192 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.652268 kubelet[2818]: E1009 00:49:23.652200 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.652334 kubelet[2818]: E1009 00:49:23.652320 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.652334 kubelet[2818]: W1009 00:49:23.652333 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.652396 kubelet[2818]: E1009 00:49:23.652343 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.652614 kubelet[2818]: E1009 00:49:23.652602 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.652614 kubelet[2818]: W1009 00:49:23.652612 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.652709 kubelet[2818]: E1009 00:49:23.652623 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.652709 kubelet[2818]: I1009 00:49:23.652648 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a439271e-ca34-412b-84ad-b23f24ed45b0-kubelet-dir\") pod \"csi-node-driver-pjbzb\" (UID: \"a439271e-ca34-412b-84ad-b23f24ed45b0\") " pod="calico-system/csi-node-driver-pjbzb"
Oct 9 00:49:23.652827 kubelet[2818]: E1009 00:49:23.652814 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.652827 kubelet[2818]: W1009 00:49:23.652827 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.652886 kubelet[2818]: E1009 00:49:23.652841 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.652886 kubelet[2818]: I1009 00:49:23.652862 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a439271e-ca34-412b-84ad-b23f24ed45b0-varrun\") pod \"csi-node-driver-pjbzb\" (UID: \"a439271e-ca34-412b-84ad-b23f24ed45b0\") " pod="calico-system/csi-node-driver-pjbzb"
Oct 9 00:49:23.653003 kubelet[2818]: E1009 00:49:23.652990 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.653003 kubelet[2818]: W1009 00:49:23.653001 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.653119 kubelet[2818]: E1009 00:49:23.653019 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.653119 kubelet[2818]: I1009 00:49:23.653036 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a439271e-ca34-412b-84ad-b23f24ed45b0-registration-dir\") pod \"csi-node-driver-pjbzb\" (UID: \"a439271e-ca34-412b-84ad-b23f24ed45b0\") " pod="calico-system/csi-node-driver-pjbzb"
Oct 9 00:49:23.653230 kubelet[2818]: E1009 00:49:23.653219 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.653230 kubelet[2818]: W1009 00:49:23.653230 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.653283 kubelet[2818]: E1009 00:49:23.653244 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.653283 kubelet[2818]: I1009 00:49:23.653262 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a439271e-ca34-412b-84ad-b23f24ed45b0-socket-dir\") pod \"csi-node-driver-pjbzb\" (UID: \"a439271e-ca34-412b-84ad-b23f24ed45b0\") " pod="calico-system/csi-node-driver-pjbzb"
Oct 9 00:49:23.653471 kubelet[2818]: E1009 00:49:23.653457 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.653471 kubelet[2818]: W1009 00:49:23.653470 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.653538 kubelet[2818]: E1009 00:49:23.653484 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.653538 kubelet[2818]: I1009 00:49:23.653500 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b69nt\" (UniqueName: \"kubernetes.io/projected/a439271e-ca34-412b-84ad-b23f24ed45b0-kube-api-access-b69nt\") pod \"csi-node-driver-pjbzb\" (UID: \"a439271e-ca34-412b-84ad-b23f24ed45b0\") " pod="calico-system/csi-node-driver-pjbzb"
Oct 9 00:49:23.654757 kubelet[2818]: E1009 00:49:23.654735 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.654757 kubelet[2818]: W1009 00:49:23.654751 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.654860 kubelet[2818]: E1009 00:49:23.654770 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.654962 kubelet[2818]: E1009 00:49:23.654923 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.654962 kubelet[2818]: W1009 00:49:23.654931 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.655425 kubelet[2818]: E1009 00:49:23.655105 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.655425 kubelet[2818]: W1009 00:49:23.655113 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.655425 kubelet[2818]: E1009 00:49:23.655235 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.655425 kubelet[2818]: W1009 00:49:23.655241 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.655425 kubelet[2818]: E1009 00:49:23.655424 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.655532 kubelet[2818]: W1009 00:49:23.655433 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.656077 kubelet[2818]: E1009 00:49:23.655628 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.656077 kubelet[2818]: W1009 00:49:23.655638 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.656077 kubelet[2818]: E1009 00:49:23.655648 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.656077 kubelet[2818]: E1009 00:49:23.655657 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.656077 kubelet[2818]: E1009 00:49:23.655663 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.656077 kubelet[2818]: E1009 00:49:23.655670 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.656077 kubelet[2818]: E1009 00:49:23.656079 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.657337 kubelet[2818]: E1009 00:49:23.656278 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.657337 kubelet[2818]: W1009 00:49:23.656287 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.657337 kubelet[2818]: E1009 00:49:23.656298 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.657337 kubelet[2818]: E1009 00:49:23.656451 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.657337 kubelet[2818]: W1009 00:49:23.656459 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.657337 kubelet[2818]: E1009 00:49:23.656468 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.657337 kubelet[2818]: E1009 00:49:23.656627 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.657337 kubelet[2818]: W1009 00:49:23.656636 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.657337 kubelet[2818]: E1009 00:49:23.656653 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.657337 kubelet[2818]: E1009 00:49:23.656829 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.657629 kubelet[2818]: W1009 00:49:23.656836 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.657629 kubelet[2818]: E1009 00:49:23.656845 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.693547 kubelet[2818]: E1009 00:49:23.693520 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:49:23.694233 containerd[1604]: time="2024-10-09T00:49:23.694184843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58f6cbfcf4-9bvrg,Uid:a221e4d1-1d3e-449b-b5bf-140c2b9eff3d,Namespace:calico-system,Attempt:0,}"
Oct 9 00:49:23.743033 kubelet[2818]: E1009 00:49:23.743004 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:49:23.743914 containerd[1604]: time="2024-10-09T00:49:23.743883540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-752sq,Uid:83df4c75-d162-435a-8a53-e773a28c0ff2,Namespace:calico-system,Attempt:0,}"
Oct 9 00:49:23.754085 kubelet[2818]: E1009 00:49:23.753945 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.754085 kubelet[2818]: W1009 00:49:23.753963 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.754085 kubelet[2818]: E1009 00:49:23.753982 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.754426 kubelet[2818]: E1009 00:49:23.754410 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.754501 kubelet[2818]: W1009 00:49:23.754484 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.754709 kubelet[2818]: E1009 00:49:23.754573 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.754813 kubelet[2818]: E1009 00:49:23.754802 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.754862 kubelet[2818]: W1009 00:49:23.754852 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.754948 kubelet[2818]: E1009 00:49:23.754939 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.755580 kubelet[2818]: E1009 00:49:23.755361 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.755580 kubelet[2818]: W1009 00:49:23.755577 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.756082 kubelet[2818]: E1009 00:49:23.755941 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.756506 kubelet[2818]: E1009 00:49:23.756486 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.756506 kubelet[2818]: W1009 00:49:23.756504 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.756577 kubelet[2818]: E1009 00:49:23.756526 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.758080 kubelet[2818]: E1009 00:49:23.757745 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.758080 kubelet[2818]: W1009 00:49:23.757759 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.758080 kubelet[2818]: E1009 00:49:23.757815 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.758080 kubelet[2818]: E1009 00:49:23.757950 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.758080 kubelet[2818]: W1009 00:49:23.757957 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.758080 kubelet[2818]: E1009 00:49:23.757998 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.758251 kubelet[2818]: E1009 00:49:23.758192 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.758251 kubelet[2818]: W1009 00:49:23.758201 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.758251 kubelet[2818]: E1009 00:49:23.758231 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.758555 kubelet[2818]: E1009 00:49:23.758507 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.758555 kubelet[2818]: W1009 00:49:23.758517 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.758632 kubelet[2818]: E1009 00:49:23.758608 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.761850 kubelet[2818]: E1009 00:49:23.758753 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.761850 kubelet[2818]: W1009 00:49:23.758765 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.761850 kubelet[2818]: E1009 00:49:23.758780 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.761850 kubelet[2818]: E1009 00:49:23.758955 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.761850 kubelet[2818]: W1009 00:49:23.758970 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.761850 kubelet[2818]: E1009 00:49:23.758981 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.761850 kubelet[2818]: E1009 00:49:23.759406 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.761850 kubelet[2818]: W1009 00:49:23.759416 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.761850 kubelet[2818]: E1009 00:49:23.759429 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.761850 kubelet[2818]: E1009 00:49:23.759744 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.762173 kubelet[2818]: W1009 00:49:23.759763 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.762173 kubelet[2818]: E1009 00:49:23.759809 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.762915 kubelet[2818]: E1009 00:49:23.762888 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.762915 kubelet[2818]: W1009 00:49:23.762909 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.763822 kubelet[2818]: E1009 00:49:23.762977 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.763822 kubelet[2818]: E1009 00:49:23.763424 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.763822 kubelet[2818]: W1009 00:49:23.763437 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.763822 kubelet[2818]: E1009 00:49:23.763558 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.763822 kubelet[2818]: E1009 00:49:23.763729 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.763822 kubelet[2818]: W1009 00:49:23.763738 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.763822 kubelet[2818]: E1009 00:49:23.763783 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.763982 kubelet[2818]: E1009 00:49:23.763951 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.763982 kubelet[2818]: W1009 00:49:23.763960 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.764024 kubelet[2818]: E1009 00:49:23.764016 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.764191 kubelet[2818]: E1009 00:49:23.764170 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.764191 kubelet[2818]: W1009 00:49:23.764181 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.764254 kubelet[2818]: E1009 00:49:23.764197 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.764382 kubelet[2818]: E1009 00:49:23.764363 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.764416 kubelet[2818]: W1009 00:49:23.764374 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.764416 kubelet[2818]: E1009 00:49:23.764404 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.764610 kubelet[2818]: E1009 00:49:23.764590 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.764610 kubelet[2818]: W1009 00:49:23.764607 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.764696 kubelet[2818]: E1009 00:49:23.764626 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.764988 kubelet[2818]: E1009 00:49:23.764960 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.764988 kubelet[2818]: W1009 00:49:23.764981 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.765084 kubelet[2818]: E1009 00:49:23.764998 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.765443 kubelet[2818]: E1009 00:49:23.765419 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.765443 kubelet[2818]: W1009 00:49:23.765434 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.765602 kubelet[2818]: E1009 00:49:23.765450 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.765867 kubelet[2818]: E1009 00:49:23.765846 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.765867 kubelet[2818]: W1009 00:49:23.765860 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.765944 kubelet[2818]: E1009 00:49:23.765881 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.766404 kubelet[2818]: E1009 00:49:23.766372 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.766404 kubelet[2818]: W1009 00:49:23.766390 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.766650 kubelet[2818]: E1009 00:49:23.766425 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.766894 kubelet[2818]: E1009 00:49:23.766875 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.766894 kubelet[2818]: W1009 00:49:23.766890 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.767155 kubelet[2818]: E1009 00:49:23.767139 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.772966 kubelet[2818]: E1009 00:49:23.772944 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 00:49:23.772966 kubelet[2818]: W1009 00:49:23.772963 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 00:49:23.773119 kubelet[2818]: E1009 00:49:23.772979 2818 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 00:49:23.779712 containerd[1604]: time="2024-10-09T00:49:23.777854235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 00:49:23.779712 containerd[1604]: time="2024-10-09T00:49:23.779138731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 00:49:23.779712 containerd[1604]: time="2024-10-09T00:49:23.779160820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:49:23.779712 containerd[1604]: time="2024-10-09T00:49:23.779267505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:49:23.788002 containerd[1604]: time="2024-10-09T00:49:23.787627873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 00:49:23.788002 containerd[1604]: time="2024-10-09T00:49:23.787718671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 00:49:23.788002 containerd[1604]: time="2024-10-09T00:49:23.787732117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:49:23.788002 containerd[1604]: time="2024-10-09T00:49:23.787881139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:49:23.821846 containerd[1604]: time="2024-10-09T00:49:23.821797851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-752sq,Uid:83df4c75-d162-435a-8a53-e773a28c0ff2,Namespace:calico-system,Attempt:0,} returns sandbox id \"cb2ef3f79febb97adb2b1bec159d4daea97d9cdabb54753d12ca2430ec1cea3a\""
Oct 9 00:49:23.823443 kubelet[2818]: E1009 00:49:23.823164 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:49:23.825204 containerd[1604]: time="2024-10-09T00:49:23.825178861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\""
Oct 9 00:49:23.843136 containerd[1604]: time="2024-10-09T00:49:23.843079210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58f6cbfcf4-9bvrg,Uid:a221e4d1-1d3e-449b-b5bf-140c2b9eff3d,Namespace:calico-system,Attempt:0,} returns sandbox id \"d2a33fcdbbbfacf49c32e7e58ed7387cd5cfa5300a1ef7ef47835546f3fb0aee\""
Oct 9 00:49:23.843835 kubelet[2818]: E1009 00:49:23.843803 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:49:24.789945 containerd[1604]: time="2024-10-09T00:49:24.789893756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:49:24.790469 containerd[1604]: time="2024-10-09T00:49:24.790422367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957"
Oct 9 00:49:24.791282 containerd[1604]: time="2024-10-09T00:49:24.791253739Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:49:24.793047 containerd[1604]: time="2024-10-09T00:49:24.793005278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:49:24.794230 containerd[1604]: time="2024-10-09T00:49:24.793724125Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 968.51197ms"
Oct 9 00:49:24.794230 containerd[1604]: time="2024-10-09T00:49:24.793756938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\""
Oct 9 00:49:24.795273 containerd[1604]: time="2024-10-09T00:49:24.795245051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\""
Oct 9 00:49:24.796194 containerd[1604]: time="2024-10-09T00:49:24.796039168Z" level=info msg="CreateContainer within sandbox \"cb2ef3f79febb97adb2b1bec159d4daea97d9cdabb54753d12ca2430ec1cea3a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Oct 9 00:49:24.808886 containerd[1604]: time="2024-10-09T00:49:24.808845878Z" level=info msg="CreateContainer
within sandbox \"cb2ef3f79febb97adb2b1bec159d4daea97d9cdabb54753d12ca2430ec1cea3a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"166c9ad647a5f60a3ccc4d69f9eece0fccfd4d9c0099ebaf1e3d68aed50144c7\"" Oct 9 00:49:24.810672 containerd[1604]: time="2024-10-09T00:49:24.809273889Z" level=info msg="StartContainer for \"166c9ad647a5f60a3ccc4d69f9eece0fccfd4d9c0099ebaf1e3d68aed50144c7\"" Oct 9 00:49:24.816356 kubelet[2818]: E1009 00:49:24.816324 2818 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjbzb" podUID="a439271e-ca34-412b-84ad-b23f24ed45b0" Oct 9 00:49:24.860921 containerd[1604]: time="2024-10-09T00:49:24.860880759Z" level=info msg="StartContainer for \"166c9ad647a5f60a3ccc4d69f9eece0fccfd4d9c0099ebaf1e3d68aed50144c7\" returns successfully" Oct 9 00:49:24.872911 kubelet[2818]: E1009 00:49:24.872885 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:24.979878 containerd[1604]: time="2024-10-09T00:49:24.958249366Z" level=info msg="shim disconnected" id=166c9ad647a5f60a3ccc4d69f9eece0fccfd4d9c0099ebaf1e3d68aed50144c7 namespace=k8s.io Oct 9 00:49:24.979878 containerd[1604]: time="2024-10-09T00:49:24.979835379Z" level=warning msg="cleaning up after shim disconnected" id=166c9ad647a5f60a3ccc4d69f9eece0fccfd4d9c0099ebaf1e3d68aed50144c7 namespace=k8s.io Oct 9 00:49:24.979878 containerd[1604]: time="2024-10-09T00:49:24.979864390Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:49:25.570184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-166c9ad647a5f60a3ccc4d69f9eece0fccfd4d9c0099ebaf1e3d68aed50144c7-rootfs.mount: Deactivated successfully. 
Oct 9 00:49:25.878092 kubelet[2818]: E1009 00:49:25.875357 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:26.377631 containerd[1604]: time="2024-10-09T00:49:26.377236079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:26.378105 containerd[1604]: time="2024-10-09T00:49:26.377786000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Oct 9 00:49:26.378465 containerd[1604]: time="2024-10-09T00:49:26.378433037Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:26.381449 containerd[1604]: time="2024-10-09T00:49:26.381275756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:26.382141 containerd[1604]: time="2024-10-09T00:49:26.382112182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 1.586835958s" Oct 9 00:49:26.382141 containerd[1604]: time="2024-10-09T00:49:26.382138232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Oct 9 00:49:26.383210 containerd[1604]: time="2024-10-09T00:49:26.383070373Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 00:49:26.391556 containerd[1604]: time="2024-10-09T00:49:26.391528627Z" level=info msg="CreateContainer within sandbox \"d2a33fcdbbbfacf49c32e7e58ed7387cd5cfa5300a1ef7ef47835546f3fb0aee\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 00:49:26.402477 containerd[1604]: time="2024-10-09T00:49:26.402437737Z" level=info msg="CreateContainer within sandbox \"d2a33fcdbbbfacf49c32e7e58ed7387cd5cfa5300a1ef7ef47835546f3fb0aee\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"864465a9814594201087a3543617ed2948f5d3038db759e8ce4467b52d894582\"" Oct 9 00:49:26.403185 containerd[1604]: time="2024-10-09T00:49:26.403159641Z" level=info msg="StartContainer for \"864465a9814594201087a3543617ed2948f5d3038db759e8ce4467b52d894582\"" Oct 9 00:49:26.466856 containerd[1604]: time="2024-10-09T00:49:26.466795797Z" level=info msg="StartContainer for \"864465a9814594201087a3543617ed2948f5d3038db759e8ce4467b52d894582\" returns successfully" Oct 9 00:49:26.816993 kubelet[2818]: E1009 00:49:26.816870 2818 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjbzb" podUID="a439271e-ca34-412b-84ad-b23f24ed45b0" Oct 9 00:49:26.879619 kubelet[2818]: E1009 00:49:26.879591 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:27.883928 kubelet[2818]: I1009 00:49:27.883892 2818 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 00:49:27.884797 kubelet[2818]: E1009 00:49:27.884508 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Oct 9 00:49:28.816623 kubelet[2818]: E1009 00:49:28.816548 2818 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjbzb" podUID="a439271e-ca34-412b-84ad-b23f24ed45b0" Oct 9 00:49:29.242572 containerd[1604]: time="2024-10-09T00:49:29.242520388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:29.243548 containerd[1604]: time="2024-10-09T00:49:29.243084050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Oct 9 00:49:29.244310 containerd[1604]: time="2024-10-09T00:49:29.244268473Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:29.247257 containerd[1604]: time="2024-10-09T00:49:29.247181215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:29.248174 containerd[1604]: time="2024-10-09T00:49:29.248144246Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 2.864965554s" Oct 9 00:49:29.248239 containerd[1604]: time="2024-10-09T00:49:29.248179297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference 
\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Oct 9 00:49:29.249885 containerd[1604]: time="2024-10-09T00:49:29.249855239Z" level=info msg="CreateContainer within sandbox \"cb2ef3f79febb97adb2b1bec159d4daea97d9cdabb54753d12ca2430ec1cea3a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 00:49:29.266405 containerd[1604]: time="2024-10-09T00:49:29.266341010Z" level=info msg="CreateContainer within sandbox \"cb2ef3f79febb97adb2b1bec159d4daea97d9cdabb54753d12ca2430ec1cea3a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"54c9ad1c30ea40fc057254f00a44bf7449bcb3e05cc99d6d269815d49839679c\"" Oct 9 00:49:29.267925 containerd[1604]: time="2024-10-09T00:49:29.266974975Z" level=info msg="StartContainer for \"54c9ad1c30ea40fc057254f00a44bf7449bcb3e05cc99d6d269815d49839679c\"" Oct 9 00:49:29.314094 containerd[1604]: time="2024-10-09T00:49:29.314023508Z" level=info msg="StartContainer for \"54c9ad1c30ea40fc057254f00a44bf7449bcb3e05cc99d6d269815d49839679c\" returns successfully" Oct 9 00:49:29.894167 kubelet[2818]: E1009 00:49:29.894121 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:29.907206 kubelet[2818]: I1009 00:49:29.905591 2818 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-58f6cbfcf4-9bvrg" podStartSLOduration=4.367658175 podStartE2EDuration="6.905551975s" podCreationTimestamp="2024-10-09 00:49:23 +0000 UTC" firstStartedPulling="2024-10-09 00:49:23.844547263 +0000 UTC m=+22.132171658" lastFinishedPulling="2024-10-09 00:49:26.382441063 +0000 UTC m=+24.670065458" observedRunningTime="2024-10-09 00:49:26.889032959 +0000 UTC m=+25.176657354" watchObservedRunningTime="2024-10-09 00:49:29.905551975 +0000 UTC m=+28.193176370" Oct 9 00:49:30.004766 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-54c9ad1c30ea40fc057254f00a44bf7449bcb3e05cc99d6d269815d49839679c-rootfs.mount: Deactivated successfully. Oct 9 00:49:30.008326 containerd[1604]: time="2024-10-09T00:49:30.008276300Z" level=info msg="shim disconnected" id=54c9ad1c30ea40fc057254f00a44bf7449bcb3e05cc99d6d269815d49839679c namespace=k8s.io Oct 9 00:49:30.008326 containerd[1604]: time="2024-10-09T00:49:30.008327236Z" level=warning msg="cleaning up after shim disconnected" id=54c9ad1c30ea40fc057254f00a44bf7449bcb3e05cc99d6d269815d49839679c namespace=k8s.io Oct 9 00:49:30.008458 containerd[1604]: time="2024-10-09T00:49:30.008335839Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:49:30.029683 kubelet[2818]: I1009 00:49:30.029642 2818 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 00:49:30.052523 kubelet[2818]: I1009 00:49:30.052472 2818 topology_manager.go:215] "Topology Admit Handler" podUID="faba5546-81fe-4ddf-8df9-993b0e47da47" podNamespace="calico-system" podName="calico-kube-controllers-798fcc5cf9-ncvpp" Oct 9 00:49:30.054998 kubelet[2818]: I1009 00:49:30.054960 2818 topology_manager.go:215] "Topology Admit Handler" podUID="33d01ef4-50fd-4adb-84dd-990d0fff876a" podNamespace="kube-system" podName="coredns-76f75df574-rtgth" Oct 9 00:49:30.055178 kubelet[2818]: I1009 00:49:30.055159 2818 topology_manager.go:215] "Topology Admit Handler" podUID="84dbcea1-dd9f-40be-9404-2879458c14d6" podNamespace="kube-system" podName="coredns-76f75df574-n4fv5" Oct 9 00:49:30.219846 kubelet[2818]: I1009 00:49:30.219703 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84dbcea1-dd9f-40be-9404-2879458c14d6-config-volume\") pod \"coredns-76f75df574-n4fv5\" (UID: \"84dbcea1-dd9f-40be-9404-2879458c14d6\") " pod="kube-system/coredns-76f75df574-n4fv5" Oct 9 00:49:30.219846 kubelet[2818]: I1009 00:49:30.219762 2818 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc2nv\" (UniqueName: \"kubernetes.io/projected/33d01ef4-50fd-4adb-84dd-990d0fff876a-kube-api-access-mc2nv\") pod \"coredns-76f75df574-rtgth\" (UID: \"33d01ef4-50fd-4adb-84dd-990d0fff876a\") " pod="kube-system/coredns-76f75df574-rtgth" Oct 9 00:49:30.219986 kubelet[2818]: I1009 00:49:30.219845 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jf7r\" (UniqueName: \"kubernetes.io/projected/faba5546-81fe-4ddf-8df9-993b0e47da47-kube-api-access-4jf7r\") pod \"calico-kube-controllers-798fcc5cf9-ncvpp\" (UID: \"faba5546-81fe-4ddf-8df9-993b0e47da47\") " pod="calico-system/calico-kube-controllers-798fcc5cf9-ncvpp" Oct 9 00:49:30.219986 kubelet[2818]: I1009 00:49:30.219887 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkrn7\" (UniqueName: \"kubernetes.io/projected/84dbcea1-dd9f-40be-9404-2879458c14d6-kube-api-access-mkrn7\") pod \"coredns-76f75df574-n4fv5\" (UID: \"84dbcea1-dd9f-40be-9404-2879458c14d6\") " pod="kube-system/coredns-76f75df574-n4fv5" Oct 9 00:49:30.219986 kubelet[2818]: I1009 00:49:30.219915 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/faba5546-81fe-4ddf-8df9-993b0e47da47-tigera-ca-bundle\") pod \"calico-kube-controllers-798fcc5cf9-ncvpp\" (UID: \"faba5546-81fe-4ddf-8df9-993b0e47da47\") " pod="calico-system/calico-kube-controllers-798fcc5cf9-ncvpp" Oct 9 00:49:30.219986 kubelet[2818]: I1009 00:49:30.219939 2818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33d01ef4-50fd-4adb-84dd-990d0fff876a-config-volume\") pod \"coredns-76f75df574-rtgth\" (UID: \"33d01ef4-50fd-4adb-84dd-990d0fff876a\") " 
pod="kube-system/coredns-76f75df574-rtgth" Oct 9 00:49:30.355702 containerd[1604]: time="2024-10-09T00:49:30.355660033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-798fcc5cf9-ncvpp,Uid:faba5546-81fe-4ddf-8df9-993b0e47da47,Namespace:calico-system,Attempt:0,}" Oct 9 00:49:30.358108 kubelet[2818]: E1009 00:49:30.358069 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:30.358517 containerd[1604]: time="2024-10-09T00:49:30.358405766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rtgth,Uid:33d01ef4-50fd-4adb-84dd-990d0fff876a,Namespace:kube-system,Attempt:0,}" Oct 9 00:49:30.361062 kubelet[2818]: E1009 00:49:30.360961 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:30.361320 containerd[1604]: time="2024-10-09T00:49:30.361289503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n4fv5,Uid:84dbcea1-dd9f-40be-9404-2879458c14d6,Namespace:kube-system,Attempt:0,}" Oct 9 00:49:30.587303 containerd[1604]: time="2024-10-09T00:49:30.587178899Z" level=error msg="Failed to destroy network for sandbox \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.587827 containerd[1604]: time="2024-10-09T00:49:30.587795811Z" level=error msg="encountered an error cleaning up failed sandbox \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.588013 containerd[1604]: time="2024-10-09T00:49:30.587882838Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rtgth,Uid:33d01ef4-50fd-4adb-84dd-990d0fff876a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.590424 containerd[1604]: time="2024-10-09T00:49:30.590318635Z" level=error msg="Failed to destroy network for sandbox \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.590818 containerd[1604]: time="2024-10-09T00:49:30.590792703Z" level=error msg="encountered an error cleaning up failed sandbox \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.591013 containerd[1604]: time="2024-10-09T00:49:30.590941269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-798fcc5cf9-ncvpp,Uid:faba5546-81fe-4ddf-8df9-993b0e47da47,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 9 00:49:30.592038 kubelet[2818]: E1009 00:49:30.591992 2818 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.592105 kubelet[2818]: E1009 00:49:30.592070 2818 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-798fcc5cf9-ncvpp" Oct 9 00:49:30.592105 kubelet[2818]: E1009 00:49:30.592091 2818 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-798fcc5cf9-ncvpp" Oct 9 00:49:30.592155 kubelet[2818]: E1009 00:49:30.592146 2818 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-798fcc5cf9-ncvpp_calico-system(faba5546-81fe-4ddf-8df9-993b0e47da47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-798fcc5cf9-ncvpp_calico-system(faba5546-81fe-4ddf-8df9-993b0e47da47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-798fcc5cf9-ncvpp" podUID="faba5546-81fe-4ddf-8df9-993b0e47da47" Oct 9 00:49:30.592268 kubelet[2818]: E1009 00:49:30.592223 2818 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.592306 kubelet[2818]: E1009 00:49:30.592283 2818 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rtgth" Oct 9 00:49:30.592306 kubelet[2818]: E1009 00:49:30.592304 2818 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rtgth" Oct 9 00:49:30.592527 kubelet[2818]: E1009 00:49:30.592500 2818 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-76f75df574-rtgth_kube-system(33d01ef4-50fd-4adb-84dd-990d0fff876a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-rtgth_kube-system(33d01ef4-50fd-4adb-84dd-990d0fff876a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rtgth" podUID="33d01ef4-50fd-4adb-84dd-990d0fff876a" Oct 9 00:49:30.596136 containerd[1604]: time="2024-10-09T00:49:30.596104874Z" level=error msg="Failed to destroy network for sandbox \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.596403 containerd[1604]: time="2024-10-09T00:49:30.596367916Z" level=error msg="encountered an error cleaning up failed sandbox \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.596452 containerd[1604]: time="2024-10-09T00:49:30.596409409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n4fv5,Uid:84dbcea1-dd9f-40be-9404-2879458c14d6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 9 00:49:30.596592 kubelet[2818]: E1009 00:49:30.596564 2818 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.596632 kubelet[2818]: E1009 00:49:30.596608 2818 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-n4fv5" Oct 9 00:49:30.596632 kubelet[2818]: E1009 00:49:30.596626 2818 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-n4fv5" Oct 9 00:49:30.596675 kubelet[2818]: E1009 00:49:30.596662 2818 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-n4fv5_kube-system(84dbcea1-dd9f-40be-9404-2879458c14d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-n4fv5_kube-system(84dbcea1-dd9f-40be-9404-2879458c14d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-n4fv5" podUID="84dbcea1-dd9f-40be-9404-2879458c14d6" Oct 9 00:49:30.818056 containerd[1604]: time="2024-10-09T00:49:30.818005230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pjbzb,Uid:a439271e-ca34-412b-84ad-b23f24ed45b0,Namespace:calico-system,Attempt:0,}" Oct 9 00:49:30.863685 containerd[1604]: time="2024-10-09T00:49:30.863644100Z" level=error msg="Failed to destroy network for sandbox \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.864104 containerd[1604]: time="2024-10-09T00:49:30.864080356Z" level=error msg="encountered an error cleaning up failed sandbox \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.864231 containerd[1604]: time="2024-10-09T00:49:30.864210477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pjbzb,Uid:a439271e-ca34-412b-84ad-b23f24ed45b0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.864532 kubelet[2818]: E1009 00:49:30.864492 2818 remote_runtime.go:193] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.864588 kubelet[2818]: E1009 00:49:30.864556 2818 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pjbzb" Oct 9 00:49:30.864588 kubelet[2818]: E1009 00:49:30.864576 2818 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pjbzb" Oct 9 00:49:30.864646 kubelet[2818]: E1009 00:49:30.864631 2818 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pjbzb_calico-system(a439271e-ca34-412b-84ad-b23f24ed45b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pjbzb_calico-system(a439271e-ca34-412b-84ad-b23f24ed45b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-pjbzb" podUID="a439271e-ca34-412b-84ad-b23f24ed45b0" Oct 9 00:49:30.892660 kubelet[2818]: I1009 00:49:30.892629 2818 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" Oct 9 00:49:30.894064 containerd[1604]: time="2024-10-09T00:49:30.893369983Z" level=info msg="StopPodSandbox for \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\"" Oct 9 00:49:30.895076 kubelet[2818]: I1009 00:49:30.894834 2818 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Oct 9 00:49:30.895429 containerd[1604]: time="2024-10-09T00:49:30.895318709Z" level=info msg="StopPodSandbox for \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\"" Oct 9 00:49:30.898211 kubelet[2818]: I1009 00:49:30.898189 2818 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Oct 9 00:49:30.898656 containerd[1604]: time="2024-10-09T00:49:30.898608172Z" level=info msg="StopPodSandbox for \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\"" Oct 9 00:49:30.901316 containerd[1604]: time="2024-10-09T00:49:30.900858432Z" level=info msg="Ensure that sandbox 86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d in task-service has been cleanup successfully" Oct 9 00:49:30.901316 containerd[1604]: time="2024-10-09T00:49:30.901087343Z" level=info msg="Ensure that sandbox 9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb in task-service has been cleanup successfully" Oct 9 00:49:30.901523 containerd[1604]: time="2024-10-09T00:49:30.901473183Z" level=info msg="Ensure that sandbox 4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2 in task-service has been cleanup successfully" Oct 9 00:49:30.906264 
kubelet[2818]: E1009 00:49:30.906228 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:30.908536 containerd[1604]: time="2024-10-09T00:49:30.908482842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 00:49:30.910353 kubelet[2818]: I1009 00:49:30.910327 2818 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" Oct 9 00:49:30.913477 containerd[1604]: time="2024-10-09T00:49:30.913442544Z" level=info msg="StopPodSandbox for \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\"" Oct 9 00:49:30.913678 containerd[1604]: time="2024-10-09T00:49:30.913649049Z" level=info msg="Ensure that sandbox 2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976 in task-service has been cleanup successfully" Oct 9 00:49:30.943625 containerd[1604]: time="2024-10-09T00:49:30.943567151Z" level=error msg="StopPodSandbox for \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\" failed" error="failed to destroy network for sandbox \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.944647 containerd[1604]: time="2024-10-09T00:49:30.944600152Z" level=error msg="StopPodSandbox for \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\" failed" error="failed to destroy network for sandbox \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.947530 kubelet[2818]: E1009 
00:49:30.947455 2818 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" Oct 9 00:49:30.947639 kubelet[2818]: E1009 00:49:30.947546 2818 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb"} Oct 9 00:49:30.947639 kubelet[2818]: E1009 00:49:30.947591 2818 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a439271e-ca34-412b-84ad-b23f24ed45b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 00:49:30.947639 kubelet[2818]: E1009 00:49:30.947621 2818 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a439271e-ca34-412b-84ad-b23f24ed45b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pjbzb" podUID="a439271e-ca34-412b-84ad-b23f24ed45b0" Oct 9 00:49:30.947967 kubelet[2818]: E1009 00:49:30.947926 2818 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" Oct 9 00:49:30.948012 kubelet[2818]: E1009 00:49:30.947980 2818 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976"} Oct 9 00:49:30.948036 kubelet[2818]: E1009 00:49:30.948026 2818 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"84dbcea1-dd9f-40be-9404-2879458c14d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 00:49:30.948089 kubelet[2818]: E1009 00:49:30.948062 2818 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"84dbcea1-dd9f-40be-9404-2879458c14d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-n4fv5" podUID="84dbcea1-dd9f-40be-9404-2879458c14d6" Oct 9 00:49:30.953476 containerd[1604]: time="2024-10-09T00:49:30.953441501Z" level=error 
msg="StopPodSandbox for \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\" failed" error="failed to destroy network for sandbox \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.953813 kubelet[2818]: E1009 00:49:30.953698 2818 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Oct 9 00:49:30.953813 kubelet[2818]: E1009 00:49:30.953734 2818 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d"} Oct 9 00:49:30.953813 kubelet[2818]: E1009 00:49:30.953765 2818 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"33d01ef4-50fd-4adb-84dd-990d0fff876a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 00:49:30.953813 kubelet[2818]: E1009 00:49:30.953793 2818 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33d01ef4-50fd-4adb-84dd-990d0fff876a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rtgth" podUID="33d01ef4-50fd-4adb-84dd-990d0fff876a" Oct 9 00:49:30.956644 containerd[1604]: time="2024-10-09T00:49:30.956604845Z" level=error msg="StopPodSandbox for \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\" failed" error="failed to destroy network for sandbox \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 00:49:30.956793 kubelet[2818]: E1009 00:49:30.956769 2818 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Oct 9 00:49:30.956828 kubelet[2818]: E1009 00:49:30.956803 2818 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2"} Oct 9 00:49:30.956862 kubelet[2818]: E1009 00:49:30.956848 2818 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"faba5546-81fe-4ddf-8df9-993b0e47da47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 00:49:30.956904 kubelet[2818]: E1009 00:49:30.956874 2818 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"faba5546-81fe-4ddf-8df9-993b0e47da47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-798fcc5cf9-ncvpp" podUID="faba5546-81fe-4ddf-8df9-993b0e47da47" Oct 9 00:49:31.336893 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d-shm.mount: Deactivated successfully. Oct 9 00:49:31.337068 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2-shm.mount: Deactivated successfully. Oct 9 00:49:32.650522 systemd[1]: Started sshd@7-10.0.0.71:22-10.0.0.1:43530.service - OpenSSH per-connection server daemon (10.0.0.1:43530). Oct 9 00:49:32.696296 sshd[3833]: Accepted publickey for core from 10.0.0.1 port 43530 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:49:32.699725 sshd[3833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:49:32.792281 systemd-logind[1577]: New session 8 of user core. Oct 9 00:49:32.804398 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 00:49:32.946727 sshd[3833]: pam_unix(sshd:session): session closed for user core Oct 9 00:49:32.951356 systemd[1]: sshd@7-10.0.0.71:22-10.0.0.1:43530.service: Deactivated successfully. 
Oct 9 00:49:32.953874 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. Oct 9 00:49:32.954546 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 00:49:32.955693 systemd-logind[1577]: Removed session 8. Oct 9 00:49:34.412594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount973908512.mount: Deactivated successfully. Oct 9 00:49:34.515625 containerd[1604]: time="2024-10-09T00:49:34.515572159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:34.516830 containerd[1604]: time="2024-10-09T00:49:34.516762518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 9 00:49:34.517701 containerd[1604]: time="2024-10-09T00:49:34.517624830Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:34.519632 containerd[1604]: time="2024-10-09T00:49:34.519585757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:34.520277 containerd[1604]: time="2024-10-09T00:49:34.520250735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 3.611707194s" Oct 9 00:49:34.520336 containerd[1604]: time="2024-10-09T00:49:34.520286425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 9 
00:49:34.531704 containerd[1604]: time="2024-10-09T00:49:34.531663560Z" level=info msg="CreateContainer within sandbox \"cb2ef3f79febb97adb2b1bec159d4daea97d9cdabb54753d12ca2430ec1cea3a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 00:49:34.545671 containerd[1604]: time="2024-10-09T00:49:34.545615867Z" level=info msg="CreateContainer within sandbox \"cb2ef3f79febb97adb2b1bec159d4daea97d9cdabb54753d12ca2430ec1cea3a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2aa233ca45ebd964c7efc68c9eee3d2c7dda605185e8081d37146fbc14c352c6\"" Oct 9 00:49:34.547097 containerd[1604]: time="2024-10-09T00:49:34.547064456Z" level=info msg="StartContainer for \"2aa233ca45ebd964c7efc68c9eee3d2c7dda605185e8081d37146fbc14c352c6\"" Oct 9 00:49:34.742067 containerd[1604]: time="2024-10-09T00:49:34.741920588Z" level=info msg="StartContainer for \"2aa233ca45ebd964c7efc68c9eee3d2c7dda605185e8081d37146fbc14c352c6\" returns successfully" Oct 9 00:49:34.800828 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 00:49:34.800942 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Oct 9 00:49:34.950910 kubelet[2818]: E1009 00:49:34.950871 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:34.981255 kubelet[2818]: I1009 00:49:34.981159 2818 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-752sq" podStartSLOduration=1.284944847 podStartE2EDuration="11.981114266s" podCreationTimestamp="2024-10-09 00:49:23 +0000 UTC" firstStartedPulling="2024-10-09 00:49:23.824571928 +0000 UTC m=+22.112196323" lastFinishedPulling="2024-10-09 00:49:34.520741347 +0000 UTC m=+32.808365742" observedRunningTime="2024-10-09 00:49:34.978219809 +0000 UTC m=+33.265844204" watchObservedRunningTime="2024-10-09 00:49:34.981114266 +0000 UTC m=+33.268738662" Oct 9 00:49:35.951775 kubelet[2818]: I1009 00:49:35.951743 2818 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 00:49:35.952849 kubelet[2818]: E1009 00:49:35.952511 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:36.957785 kubelet[2818]: E1009 00:49:36.957749 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:37.963353 systemd[1]: Started sshd@8-10.0.0.71:22-10.0.0.1:43536.service - OpenSSH per-connection server daemon (10.0.0.1:43536). Oct 9 00:49:37.999454 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 43536 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:49:38.000950 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:49:38.005103 systemd-logind[1577]: New session 9 of user core. 
Oct 9 00:49:38.016424 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 00:49:38.138161 sshd[4118]: pam_unix(sshd:session): session closed for user core Oct 9 00:49:38.141378 systemd[1]: sshd@8-10.0.0.71:22-10.0.0.1:43536.service: Deactivated successfully. Oct 9 00:49:38.143456 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. Oct 9 00:49:38.143483 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 00:49:38.144971 systemd-logind[1577]: Removed session 9. Oct 9 00:49:39.979864 kubelet[2818]: I1009 00:49:39.979806 2818 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 00:49:39.980590 kubelet[2818]: E1009 00:49:39.980574 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:40.445119 kernel: bpftool[4232]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 00:49:40.614333 systemd-networkd[1240]: vxlan.calico: Link UP Oct 9 00:49:40.614342 systemd-networkd[1240]: vxlan.calico: Gained carrier Oct 9 00:49:40.961308 kubelet[2818]: E1009 00:49:40.961269 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:41.817780 containerd[1604]: time="2024-10-09T00:49:41.817554636Z" level=info msg="StopPodSandbox for \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\"" Oct 9 00:49:41.817780 containerd[1604]: time="2024-10-09T00:49:41.817699347Z" level=info msg="StopPodSandbox for \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\"" Oct 9 00:49:41.818675 containerd[1604]: time="2024-10-09T00:49:41.818243105Z" level=info msg="StopPodSandbox for \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\"" Oct 9 00:49:41.818675 containerd[1604]: time="2024-10-09T00:49:41.818510642Z" 
level=info msg="StopPodSandbox for \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\"" Oct 9 00:49:42.132416 containerd[1604]: 2024-10-09 00:49:41.985 [INFO][4381] k8s.go 608: Cleaning up netns ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Oct 9 00:49:42.132416 containerd[1604]: 2024-10-09 00:49:41.986 [INFO][4381] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" iface="eth0" netns="/var/run/netns/cni-d1469280-ff9d-8e20-e90e-13e6d56a055f" Oct 9 00:49:42.132416 containerd[1604]: 2024-10-09 00:49:41.986 [INFO][4381] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" iface="eth0" netns="/var/run/netns/cni-d1469280-ff9d-8e20-e90e-13e6d56a055f" Oct 9 00:49:42.132416 containerd[1604]: 2024-10-09 00:49:41.986 [INFO][4381] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" iface="eth0" netns="/var/run/netns/cni-d1469280-ff9d-8e20-e90e-13e6d56a055f" Oct 9 00:49:42.132416 containerd[1604]: 2024-10-09 00:49:41.986 [INFO][4381] k8s.go 615: Releasing IP address(es) ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Oct 9 00:49:42.132416 containerd[1604]: 2024-10-09 00:49:41.986 [INFO][4381] utils.go 188: Calico CNI releasing IP address ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Oct 9 00:49:42.132416 containerd[1604]: 2024-10-09 00:49:42.113 [INFO][4416] ipam_plugin.go 417: Releasing address using handleID ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" HandleID="k8s-pod-network.4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Workload="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" Oct 9 00:49:42.132416 containerd[1604]: 2024-10-09 00:49:42.113 [INFO][4416] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:49:42.132416 containerd[1604]: 2024-10-09 00:49:42.113 [INFO][4416] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:49:42.132416 containerd[1604]: 2024-10-09 00:49:42.127 [WARNING][4416] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" HandleID="k8s-pod-network.4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Workload="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" Oct 9 00:49:42.132416 containerd[1604]: 2024-10-09 00:49:42.128 [INFO][4416] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" HandleID="k8s-pod-network.4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Workload="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" Oct 9 00:49:42.132416 containerd[1604]: 2024-10-09 00:49:42.129 [INFO][4416] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:49:42.132416 containerd[1604]: 2024-10-09 00:49:42.130 [INFO][4381] k8s.go 621: Teardown processing complete. ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Oct 9 00:49:42.133751 containerd[1604]: time="2024-10-09T00:49:42.133443996Z" level=info msg="TearDown network for sandbox \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\" successfully" Oct 9 00:49:42.133751 containerd[1604]: time="2024-10-09T00:49:42.133475563Z" level=info msg="StopPodSandbox for \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\" returns successfully" Oct 9 00:49:42.135329 containerd[1604]: time="2024-10-09T00:49:42.135274902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-798fcc5cf9-ncvpp,Uid:faba5546-81fe-4ddf-8df9-993b0e47da47,Namespace:calico-system,Attempt:1,}" Oct 9 00:49:42.135588 systemd[1]: run-netns-cni\x2dd1469280\x2dff9d\x2d8e20\x2de90e\x2d13e6d56a055f.mount: Deactivated successfully. 
Oct 9 00:49:42.146088 containerd[1604]: 2024-10-09 00:49:42.000 [INFO][4386] k8s.go 608: Cleaning up netns ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" Oct 9 00:49:42.146088 containerd[1604]: 2024-10-09 00:49:42.000 [INFO][4386] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" iface="eth0" netns="/var/run/netns/cni-f3efe7f7-c4e2-f42a-5c45-e579b0530a64" Oct 9 00:49:42.146088 containerd[1604]: 2024-10-09 00:49:42.001 [INFO][4386] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" iface="eth0" netns="/var/run/netns/cni-f3efe7f7-c4e2-f42a-5c45-e579b0530a64" Oct 9 00:49:42.146088 containerd[1604]: 2024-10-09 00:49:42.001 [INFO][4386] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" iface="eth0" netns="/var/run/netns/cni-f3efe7f7-c4e2-f42a-5c45-e579b0530a64" Oct 9 00:49:42.146088 containerd[1604]: 2024-10-09 00:49:42.001 [INFO][4386] k8s.go 615: Releasing IP address(es) ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" Oct 9 00:49:42.146088 containerd[1604]: 2024-10-09 00:49:42.001 [INFO][4386] utils.go 188: Calico CNI releasing IP address ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" Oct 9 00:49:42.146088 containerd[1604]: 2024-10-09 00:49:42.113 [INFO][4426] ipam_plugin.go 417: Releasing address using handleID ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" HandleID="k8s-pod-network.2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" Workload="localhost-k8s-coredns--76f75df574--n4fv5-eth0" Oct 9 00:49:42.146088 containerd[1604]: 2024-10-09 00:49:42.113 [INFO][4426] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 9 00:49:42.146088 containerd[1604]: 2024-10-09 00:49:42.129 [INFO][4426] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:49:42.146088 containerd[1604]: 2024-10-09 00:49:42.138 [WARNING][4426] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" HandleID="k8s-pod-network.2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" Workload="localhost-k8s-coredns--76f75df574--n4fv5-eth0" Oct 9 00:49:42.146088 containerd[1604]: 2024-10-09 00:49:42.138 [INFO][4426] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" HandleID="k8s-pod-network.2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" Workload="localhost-k8s-coredns--76f75df574--n4fv5-eth0" Oct 9 00:49:42.146088 containerd[1604]: 2024-10-09 00:49:42.140 [INFO][4426] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:49:42.146088 containerd[1604]: 2024-10-09 00:49:42.142 [INFO][4386] k8s.go 621: Teardown processing complete. 
ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" Oct 9 00:49:42.147473 containerd[1604]: time="2024-10-09T00:49:42.146255214Z" level=info msg="TearDown network for sandbox \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\" successfully" Oct 9 00:49:42.147473 containerd[1604]: time="2024-10-09T00:49:42.146279899Z" level=info msg="StopPodSandbox for \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\" returns successfully" Oct 9 00:49:42.147473 containerd[1604]: time="2024-10-09T00:49:42.147381571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n4fv5,Uid:84dbcea1-dd9f-40be-9404-2879458c14d6,Namespace:kube-system,Attempt:1,}" Oct 9 00:49:42.147545 kubelet[2818]: E1009 00:49:42.146702 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:42.148948 systemd[1]: run-netns-cni\x2df3efe7f7\x2dc4e2\x2df42a\x2d5c45\x2de579b0530a64.mount: Deactivated successfully. Oct 9 00:49:42.161322 containerd[1604]: 2024-10-09 00:49:41.984 [INFO][4387] k8s.go 608: Cleaning up netns ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Oct 9 00:49:42.161322 containerd[1604]: 2024-10-09 00:49:41.984 [INFO][4387] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" iface="eth0" netns="/var/run/netns/cni-9edf5f2e-296d-e19e-5a4e-b28632dad8e8" Oct 9 00:49:42.161322 containerd[1604]: 2024-10-09 00:49:41.985 [INFO][4387] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" iface="eth0" netns="/var/run/netns/cni-9edf5f2e-296d-e19e-5a4e-b28632dad8e8" Oct 9 00:49:42.161322 containerd[1604]: 2024-10-09 00:49:41.985 [INFO][4387] dataplane_linux.go 568: Workload's veth was already gone. 
Nothing to do. ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" iface="eth0" netns="/var/run/netns/cni-9edf5f2e-296d-e19e-5a4e-b28632dad8e8" Oct 9 00:49:42.161322 containerd[1604]: 2024-10-09 00:49:41.986 [INFO][4387] k8s.go 615: Releasing IP address(es) ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Oct 9 00:49:42.161322 containerd[1604]: 2024-10-09 00:49:41.986 [INFO][4387] utils.go 188: Calico CNI releasing IP address ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Oct 9 00:49:42.161322 containerd[1604]: 2024-10-09 00:49:42.113 [INFO][4414] ipam_plugin.go 417: Releasing address using handleID ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" HandleID="k8s-pod-network.86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Workload="localhost-k8s-coredns--76f75df574--rtgth-eth0" Oct 9 00:49:42.161322 containerd[1604]: 2024-10-09 00:49:42.113 [INFO][4414] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:49:42.161322 containerd[1604]: 2024-10-09 00:49:42.140 [INFO][4414] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:49:42.161322 containerd[1604]: 2024-10-09 00:49:42.154 [WARNING][4414] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" HandleID="k8s-pod-network.86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Workload="localhost-k8s-coredns--76f75df574--rtgth-eth0" Oct 9 00:49:42.161322 containerd[1604]: 2024-10-09 00:49:42.154 [INFO][4414] ipam_plugin.go 445: Releasing address using workloadID ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" HandleID="k8s-pod-network.86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Workload="localhost-k8s-coredns--76f75df574--rtgth-eth0" Oct 9 00:49:42.161322 containerd[1604]: 2024-10-09 00:49:42.155 [INFO][4414] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:49:42.161322 containerd[1604]: 2024-10-09 00:49:42.157 [INFO][4387] k8s.go 621: Teardown processing complete. ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Oct 9 00:49:42.162424 containerd[1604]: time="2024-10-09T00:49:42.161437930Z" level=info msg="TearDown network for sandbox \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\" successfully" Oct 9 00:49:42.162424 containerd[1604]: time="2024-10-09T00:49:42.161461175Z" level=info msg="StopPodSandbox for \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\" returns successfully" Oct 9 00:49:42.162473 kubelet[2818]: E1009 00:49:42.162308 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:42.162929 containerd[1604]: time="2024-10-09T00:49:42.162887876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rtgth,Uid:33d01ef4-50fd-4adb-84dd-990d0fff876a,Namespace:kube-system,Attempt:1,}" Oct 9 00:49:42.167638 systemd[1]: run-netns-cni\x2d9edf5f2e\x2d296d\x2de19e\x2d5a4e\x2db28632dad8e8.mount: Deactivated successfully. 
Oct 9 00:49:42.178494 containerd[1604]: 2024-10-09 00:49:41.983 [INFO][4388] k8s.go 608: Cleaning up netns ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" Oct 9 00:49:42.178494 containerd[1604]: 2024-10-09 00:49:41.984 [INFO][4388] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" iface="eth0" netns="/var/run/netns/cni-c842ee0e-38ec-9ec8-2e17-471860c9b235" Oct 9 00:49:42.178494 containerd[1604]: 2024-10-09 00:49:41.985 [INFO][4388] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" iface="eth0" netns="/var/run/netns/cni-c842ee0e-38ec-9ec8-2e17-471860c9b235" Oct 9 00:49:42.178494 containerd[1604]: 2024-10-09 00:49:41.985 [INFO][4388] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" iface="eth0" netns="/var/run/netns/cni-c842ee0e-38ec-9ec8-2e17-471860c9b235" Oct 9 00:49:42.178494 containerd[1604]: 2024-10-09 00:49:41.985 [INFO][4388] k8s.go 615: Releasing IP address(es) ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" Oct 9 00:49:42.178494 containerd[1604]: 2024-10-09 00:49:41.985 [INFO][4388] utils.go 188: Calico CNI releasing IP address ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" Oct 9 00:49:42.178494 containerd[1604]: 2024-10-09 00:49:42.113 [INFO][4415] ipam_plugin.go 417: Releasing address using handleID ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" HandleID="k8s-pod-network.9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" Workload="localhost-k8s-csi--node--driver--pjbzb-eth0" Oct 9 00:49:42.178494 containerd[1604]: 2024-10-09 00:49:42.113 [INFO][4415] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 9 00:49:42.178494 containerd[1604]: 2024-10-09 00:49:42.155 [INFO][4415] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:49:42.178494 containerd[1604]: 2024-10-09 00:49:42.171 [WARNING][4415] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" HandleID="k8s-pod-network.9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" Workload="localhost-k8s-csi--node--driver--pjbzb-eth0" Oct 9 00:49:42.178494 containerd[1604]: 2024-10-09 00:49:42.171 [INFO][4415] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" HandleID="k8s-pod-network.9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" Workload="localhost-k8s-csi--node--driver--pjbzb-eth0" Oct 9 00:49:42.178494 containerd[1604]: 2024-10-09 00:49:42.174 [INFO][4415] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:49:42.178494 containerd[1604]: 2024-10-09 00:49:42.176 [INFO][4388] k8s.go 621: Teardown processing complete. 
ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" Oct 9 00:49:42.179574 containerd[1604]: time="2024-10-09T00:49:42.179275166Z" level=info msg="TearDown network for sandbox \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\" successfully" Oct 9 00:49:42.179574 containerd[1604]: time="2024-10-09T00:49:42.179565987Z" level=info msg="StopPodSandbox for \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\" returns successfully" Oct 9 00:49:42.180280 containerd[1604]: time="2024-10-09T00:49:42.180233848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pjbzb,Uid:a439271e-ca34-412b-84ad-b23f24ed45b0,Namespace:calico-system,Attempt:1,}" Oct 9 00:49:42.322540 systemd-networkd[1240]: cali4a6e1654186: Link UP Oct 9 00:49:42.322744 systemd-networkd[1240]: cali4a6e1654186: Gained carrier Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.233 [INFO][4459] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--n4fv5-eth0 coredns-76f75df574- kube-system 84dbcea1-dd9f-40be-9404-2879458c14d6 794 0 2024-10-09 00:49:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-n4fv5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4a6e1654186 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" Namespace="kube-system" Pod="coredns-76f75df574-n4fv5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--n4fv5-" Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.234 [INFO][4459] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" Namespace="kube-system" 
Pod="coredns-76f75df574-n4fv5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--n4fv5-eth0" Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.265 [INFO][4509] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" HandleID="k8s-pod-network.4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" Workload="localhost-k8s-coredns--76f75df574--n4fv5-eth0" Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.276 [INFO][4509] ipam_plugin.go 270: Auto assigning IP ContainerID="4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" HandleID="k8s-pod-network.4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" Workload="localhost-k8s-coredns--76f75df574--n4fv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dc040), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-n4fv5", "timestamp":"2024-10-09 00:49:42.265826789 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.276 [INFO][4509] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.281 [INFO][4509] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.281 [INFO][4509] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.284 [INFO][4509] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" host="localhost" Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.294 [INFO][4509] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.298 [INFO][4509] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.300 [INFO][4509] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.303 [INFO][4509] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.303 [INFO][4509] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" host="localhost" Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.305 [INFO][4509] ipam.go 1685: Creating new handle: k8s-pod-network.4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9 Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.308 [INFO][4509] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" host="localhost" Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.313 [INFO][4509] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" host="localhost" Oct 9 
00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.314 [INFO][4509] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" host="localhost" Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.314 [INFO][4509] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:49:42.339697 containerd[1604]: 2024-10-09 00:49:42.314 [INFO][4509] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" HandleID="k8s-pod-network.4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" Workload="localhost-k8s-coredns--76f75df574--n4fv5-eth0" Oct 9 00:49:42.340293 containerd[1604]: 2024-10-09 00:49:42.317 [INFO][4459] k8s.go 386: Populated endpoint ContainerID="4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" Namespace="kube-system" Pod="coredns-76f75df574-n4fv5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--n4fv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--n4fv5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"84dbcea1-dd9f-40be-9404-2879458c14d6", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-76f75df574-n4fv5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a6e1654186", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:49:42.340293 containerd[1604]: 2024-10-09 00:49:42.317 [INFO][4459] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" Namespace="kube-system" Pod="coredns-76f75df574-n4fv5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--n4fv5-eth0" Oct 9 00:49:42.340293 containerd[1604]: 2024-10-09 00:49:42.317 [INFO][4459] dataplane_linux.go 68: Setting the host side veth name to cali4a6e1654186 ContainerID="4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" Namespace="kube-system" Pod="coredns-76f75df574-n4fv5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--n4fv5-eth0" Oct 9 00:49:42.340293 containerd[1604]: 2024-10-09 00:49:42.320 [INFO][4459] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" Namespace="kube-system" Pod="coredns-76f75df574-n4fv5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--n4fv5-eth0" Oct 9 00:49:42.340293 containerd[1604]: 2024-10-09 00:49:42.321 [INFO][4459] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" Namespace="kube-system" Pod="coredns-76f75df574-n4fv5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--n4fv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--n4fv5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"84dbcea1-dd9f-40be-9404-2879458c14d6", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9", Pod:"coredns-76f75df574-n4fv5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a6e1654186", MAC:"de:7d:61:37:76:21", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:49:42.340293 containerd[1604]: 2024-10-09 00:49:42.337 [INFO][4459] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9" Namespace="kube-system" Pod="coredns-76f75df574-n4fv5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--n4fv5-eth0" Oct 9 00:49:42.360913 systemd-networkd[1240]: cali1f1bdbe4c2c: Link UP Oct 9 00:49:42.363668 systemd-networkd[1240]: cali1f1bdbe4c2c: Gained carrier Oct 9 00:49:42.369244 systemd-networkd[1240]: vxlan.calico: Gained IPv6LL Oct 9 00:49:42.377906 containerd[1604]: time="2024-10-09T00:49:42.376150178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:49:42.377906 containerd[1604]: time="2024-10-09T00:49:42.376213031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:49:42.377906 containerd[1604]: time="2024-10-09T00:49:42.376224714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:49:42.377906 containerd[1604]: time="2024-10-09T00:49:42.376317413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.223 [INFO][4453] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0 calico-kube-controllers-798fcc5cf9- calico-system faba5546-81fe-4ddf-8df9-993b0e47da47 792 0 2024-10-09 00:49:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:798fcc5cf9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-798fcc5cf9-ncvpp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1f1bdbe4c2c [] []}} ContainerID="3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" Namespace="calico-system" Pod="calico-kube-controllers-798fcc5cf9-ncvpp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-" Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.223 [INFO][4453] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" Namespace="calico-system" Pod="calico-kube-controllers-798fcc5cf9-ncvpp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.270 [INFO][4503] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" HandleID="k8s-pod-network.3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" Workload="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.287 [INFO][4503] ipam_plugin.go 270: Auto assigning IP 
ContainerID="3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" HandleID="k8s-pod-network.3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" Workload="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027cf40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-798fcc5cf9-ncvpp", "timestamp":"2024-10-09 00:49:42.270547743 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.287 [INFO][4503] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.315 [INFO][4503] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.316 [INFO][4503] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.318 [INFO][4503] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" host="localhost" Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.323 [INFO][4503] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.330 [INFO][4503] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.333 [INFO][4503] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.340 [INFO][4503] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 
9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.340 [INFO][4503] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" host="localhost" Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.342 [INFO][4503] ipam.go 1685: Creating new handle: k8s-pod-network.3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27 Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.345 [INFO][4503] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" host="localhost" Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.351 [INFO][4503] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" host="localhost" Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.351 [INFO][4503] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" host="localhost" Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.351 [INFO][4503] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 00:49:42.378251 containerd[1604]: 2024-10-09 00:49:42.351 [INFO][4503] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" HandleID="k8s-pod-network.3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" Workload="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" Oct 9 00:49:42.378692 containerd[1604]: 2024-10-09 00:49:42.354 [INFO][4453] k8s.go 386: Populated endpoint ContainerID="3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" Namespace="calico-system" Pod="calico-kube-controllers-798fcc5cf9-ncvpp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0", GenerateName:"calico-kube-controllers-798fcc5cf9-", Namespace:"calico-system", SelfLink:"", UID:"faba5546-81fe-4ddf-8df9-993b0e47da47", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 49, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"798fcc5cf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-798fcc5cf9-ncvpp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f1bdbe4c2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:49:42.378692 containerd[1604]: 2024-10-09 00:49:42.355 [INFO][4453] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" Namespace="calico-system" Pod="calico-kube-controllers-798fcc5cf9-ncvpp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" Oct 9 00:49:42.378692 containerd[1604]: 2024-10-09 00:49:42.355 [INFO][4453] dataplane_linux.go 68: Setting the host side veth name to cali1f1bdbe4c2c ContainerID="3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" Namespace="calico-system" Pod="calico-kube-controllers-798fcc5cf9-ncvpp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" Oct 9 00:49:42.378692 containerd[1604]: 2024-10-09 00:49:42.364 [INFO][4453] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" Namespace="calico-system" Pod="calico-kube-controllers-798fcc5cf9-ncvpp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" Oct 9 00:49:42.378692 containerd[1604]: 2024-10-09 00:49:42.364 [INFO][4453] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" Namespace="calico-system" Pod="calico-kube-controllers-798fcc5cf9-ncvpp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0", 
GenerateName:"calico-kube-controllers-798fcc5cf9-", Namespace:"calico-system", SelfLink:"", UID:"faba5546-81fe-4ddf-8df9-993b0e47da47", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 49, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"798fcc5cf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27", Pod:"calico-kube-controllers-798fcc5cf9-ncvpp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f1bdbe4c2c", MAC:"5a:97:7b:34:b7:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:49:42.378692 containerd[1604]: 2024-10-09 00:49:42.373 [INFO][4453] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27" Namespace="calico-system" Pod="calico-kube-controllers-798fcc5cf9-ncvpp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" Oct 9 00:49:42.402757 systemd-networkd[1240]: cali99ec482902d: Link UP Oct 9 00:49:42.402954 systemd-networkd[1240]: cali99ec482902d: Gained carrier Oct 9 00:49:42.416273 containerd[1604]: time="2024-10-09T00:49:42.415539231Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:49:42.416273 containerd[1604]: time="2024-10-09T00:49:42.415600084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:49:42.416273 containerd[1604]: time="2024-10-09T00:49:42.415615967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.232 [INFO][4472] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--rtgth-eth0 coredns-76f75df574- kube-system 33d01ef4-50fd-4adb-84dd-990d0fff876a 791 0 2024-10-09 00:49:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-rtgth eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali99ec482902d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" Namespace="kube-system" Pod="coredns-76f75df574-rtgth" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rtgth-" Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.233 [INFO][4472] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" Namespace="kube-system" Pod="coredns-76f75df574-rtgth" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rtgth-eth0" Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.268 [INFO][4508] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" 
HandleID="k8s-pod-network.a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" Workload="localhost-k8s-coredns--76f75df574--rtgth-eth0" Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.288 [INFO][4508] ipam_plugin.go 270: Auto assigning IP ContainerID="a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" HandleID="k8s-pod-network.a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" Workload="localhost-k8s-coredns--76f75df574--rtgth-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033a0d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-rtgth", "timestamp":"2024-10-09 00:49:42.268230976 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.288 [INFO][4508] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.351 [INFO][4508] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.351 [INFO][4508] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.354 [INFO][4508] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" host="localhost" Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.361 [INFO][4508] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.369 [INFO][4508] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.371 [INFO][4508] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.376 [INFO][4508] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.376 [INFO][4508] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" host="localhost" Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.378 [INFO][4508] ipam.go 1685: Creating new handle: k8s-pod-network.a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.388 [INFO][4508] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" host="localhost" Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.394 [INFO][4508] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" host="localhost" Oct 9 
00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.394 [INFO][4508] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" host="localhost" Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.394 [INFO][4508] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:49:42.420163 containerd[1604]: 2024-10-09 00:49:42.394 [INFO][4508] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" HandleID="k8s-pod-network.a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" Workload="localhost-k8s-coredns--76f75df574--rtgth-eth0" Oct 9 00:49:42.420716 containerd[1604]: 2024-10-09 00:49:42.398 [INFO][4472] k8s.go 386: Populated endpoint ContainerID="a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" Namespace="kube-system" Pod="coredns-76f75df574-rtgth" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rtgth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rtgth-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"33d01ef4-50fd-4adb-84dd-990d0fff876a", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-76f75df574-rtgth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99ec482902d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:49:42.420716 containerd[1604]: 2024-10-09 00:49:42.399 [INFO][4472] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" Namespace="kube-system" Pod="coredns-76f75df574-rtgth" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rtgth-eth0" Oct 9 00:49:42.420716 containerd[1604]: 2024-10-09 00:49:42.399 [INFO][4472] dataplane_linux.go 68: Setting the host side veth name to cali99ec482902d ContainerID="a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" Namespace="kube-system" Pod="coredns-76f75df574-rtgth" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rtgth-eth0" Oct 9 00:49:42.420716 containerd[1604]: 2024-10-09 00:49:42.402 [INFO][4472] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" Namespace="kube-system" Pod="coredns-76f75df574-rtgth" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rtgth-eth0" Oct 9 00:49:42.420716 containerd[1604]: 2024-10-09 00:49:42.403 [INFO][4472] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" Namespace="kube-system" Pod="coredns-76f75df574-rtgth" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rtgth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rtgth-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"33d01ef4-50fd-4adb-84dd-990d0fff876a", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd", Pod:"coredns-76f75df574-rtgth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99ec482902d", MAC:"9a:8f:60:be:18:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:49:42.420716 containerd[1604]: 2024-10-09 00:49:42.412 [INFO][4472] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd" Namespace="kube-system" Pod="coredns-76f75df574-rtgth" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--rtgth-eth0" Oct 9 00:49:42.420716 containerd[1604]: time="2024-10-09T00:49:42.420490154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:49:42.427015 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 00:49:42.446721 systemd-networkd[1240]: cali23fe3d43dab: Link UP Oct 9 00:49:42.448034 systemd-networkd[1240]: cali23fe3d43dab: Gained carrier Oct 9 00:49:42.469094 containerd[1604]: time="2024-10-09T00:49:42.468210521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n4fv5,Uid:84dbcea1-dd9f-40be-9404-2879458c14d6,Namespace:kube-system,Attempt:1,} returns sandbox id \"4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9\"" Oct 9 00:49:42.470039 kubelet[2818]: E1009 00:49:42.469921 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:42.472703 containerd[1604]: time="2024-10-09T00:49:42.472473499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:49:42.472703 containerd[1604]: time="2024-10-09T00:49:42.472616289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:49:42.472703 containerd[1604]: time="2024-10-09T00:49:42.472628411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:49:42.472842 containerd[1604]: time="2024-10-09T00:49:42.472795447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.247 [INFO][4486] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pjbzb-eth0 csi-node-driver- calico-system a439271e-ca34-412b-84ad-b23f24ed45b0 793 0 2024-10-09 00:49:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-pjbzb eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali23fe3d43dab [] []}} ContainerID="26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" Namespace="calico-system" Pod="csi-node-driver-pjbzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjbzb-" Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.247 [INFO][4486] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" Namespace="calico-system" Pod="csi-node-driver-pjbzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjbzb-eth0" Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.284 [INFO][4523] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" HandleID="k8s-pod-network.26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" Workload="localhost-k8s-csi--node--driver--pjbzb-eth0" Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.297 [INFO][4523] ipam_plugin.go 270: Auto assigning IP 
ContainerID="26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" HandleID="k8s-pod-network.26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" Workload="localhost-k8s-csi--node--driver--pjbzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039eb00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pjbzb", "timestamp":"2024-10-09 00:49:42.284643551 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.297 [INFO][4523] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.394 [INFO][4523] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.394 [INFO][4523] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.396 [INFO][4523] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" host="localhost" Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.401 [INFO][4523] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.418 [INFO][4523] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.420 [INFO][4523] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.424 [INFO][4523] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 00:49:42.475425 containerd[1604]: 
2024-10-09 00:49:42.424 [INFO][4523] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" host="localhost" Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.426 [INFO][4523] ipam.go 1685: Creating new handle: k8s-pod-network.26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.430 [INFO][4523] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" host="localhost" Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.439 [INFO][4523] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" host="localhost" Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.439 [INFO][4523] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" host="localhost" Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.439 [INFO][4523] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 00:49:42.475425 containerd[1604]: 2024-10-09 00:49:42.439 [INFO][4523] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" HandleID="k8s-pod-network.26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" Workload="localhost-k8s-csi--node--driver--pjbzb-eth0" Oct 9 00:49:42.475887 containerd[1604]: 2024-10-09 00:49:42.443 [INFO][4486] k8s.go 386: Populated endpoint ContainerID="26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" Namespace="calico-system" Pod="csi-node-driver-pjbzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjbzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pjbzb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a439271e-ca34-412b-84ad-b23f24ed45b0", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 49, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pjbzb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, 
InterfaceName:"cali23fe3d43dab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:49:42.475887 containerd[1604]: 2024-10-09 00:49:42.444 [INFO][4486] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" Namespace="calico-system" Pod="csi-node-driver-pjbzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjbzb-eth0" Oct 9 00:49:42.475887 containerd[1604]: 2024-10-09 00:49:42.444 [INFO][4486] dataplane_linux.go 68: Setting the host side veth name to cali23fe3d43dab ContainerID="26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" Namespace="calico-system" Pod="csi-node-driver-pjbzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjbzb-eth0" Oct 9 00:49:42.475887 containerd[1604]: 2024-10-09 00:49:42.454 [INFO][4486] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" Namespace="calico-system" Pod="csi-node-driver-pjbzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjbzb-eth0" Oct 9 00:49:42.475887 containerd[1604]: 2024-10-09 00:49:42.454 [INFO][4486] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" Namespace="calico-system" Pod="csi-node-driver-pjbzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjbzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pjbzb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a439271e-ca34-412b-84ad-b23f24ed45b0", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 49, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a", Pod:"csi-node-driver-pjbzb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali23fe3d43dab", MAC:"0a:3f:ba:7a:4a:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:49:42.475887 containerd[1604]: 2024-10-09 00:49:42.465 [INFO][4486] k8s.go 500: Wrote updated endpoint to datastore ContainerID="26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a" Namespace="calico-system" Pod="csi-node-driver-pjbzb" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjbzb-eth0" Oct 9 00:49:42.478610 containerd[1604]: time="2024-10-09T00:49:42.478159936Z" level=info msg="CreateContainer within sandbox \"4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 00:49:42.484067 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 00:49:42.506463 containerd[1604]: time="2024-10-09T00:49:42.506407204Z" level=info msg="CreateContainer within sandbox \"4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container 
id \"310636e689e32d20870b6f43c6d456ca0696b0928f2e2ec25815da5ff474118e\"" Oct 9 00:49:42.509339 containerd[1604]: time="2024-10-09T00:49:42.509074925Z" level=info msg="StartContainer for \"310636e689e32d20870b6f43c6d456ca0696b0928f2e2ec25815da5ff474118e\"" Oct 9 00:49:42.511708 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 00:49:42.521687 containerd[1604]: time="2024-10-09T00:49:42.521643051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-798fcc5cf9-ncvpp,Uid:faba5546-81fe-4ddf-8df9-993b0e47da47,Namespace:calico-system,Attempt:1,} returns sandbox id \"3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27\"" Oct 9 00:49:42.523988 containerd[1604]: time="2024-10-09T00:49:42.523655435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 00:49:42.532832 containerd[1604]: time="2024-10-09T00:49:42.532449407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:49:42.532832 containerd[1604]: time="2024-10-09T00:49:42.532526383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:49:42.532832 containerd[1604]: time="2024-10-09T00:49:42.532542746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:49:42.532832 containerd[1604]: time="2024-10-09T00:49:42.532658611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:49:42.535625 containerd[1604]: time="2024-10-09T00:49:42.535582506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rtgth,Uid:33d01ef4-50fd-4adb-84dd-990d0fff876a,Namespace:kube-system,Attempt:1,} returns sandbox id \"a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd\"" Oct 9 00:49:42.546600 kubelet[2818]: E1009 00:49:42.546215 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:42.552366 containerd[1604]: time="2024-10-09T00:49:42.551782477Z" level=info msg="CreateContainer within sandbox \"a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 00:49:42.561535 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 00:49:42.564236 containerd[1604]: time="2024-10-09T00:49:42.564205253Z" level=info msg="CreateContainer within sandbox \"a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"61b87641e996627410853b6e86eb3b9a81bad95ff7d1c72249cd23650cf7157a\"" Oct 9 00:49:42.567377 containerd[1604]: time="2024-10-09T00:49:42.567331391Z" level=info msg="StartContainer for \"61b87641e996627410853b6e86eb3b9a81bad95ff7d1c72249cd23650cf7157a\"" Oct 9 00:49:42.580336 containerd[1604]: time="2024-10-09T00:49:42.580118403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pjbzb,Uid:a439271e-ca34-412b-84ad-b23f24ed45b0,Namespace:calico-system,Attempt:1,} returns sandbox id \"26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a\"" Oct 9 00:49:42.589240 containerd[1604]: time="2024-10-09T00:49:42.585894540Z" level=info msg="StartContainer for 
\"310636e689e32d20870b6f43c6d456ca0696b0928f2e2ec25815da5ff474118e\" returns successfully" Oct 9 00:49:42.664837 containerd[1604]: time="2024-10-09T00:49:42.664132333Z" level=info msg="StartContainer for \"61b87641e996627410853b6e86eb3b9a81bad95ff7d1c72249cd23650cf7157a\" returns successfully" Oct 9 00:49:42.969168 kubelet[2818]: E1009 00:49:42.968605 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:42.974393 kubelet[2818]: E1009 00:49:42.974096 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:42.980165 kubelet[2818]: I1009 00:49:42.980129 2818 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-n4fv5" podStartSLOduration=25.980087897 podStartE2EDuration="25.980087897s" podCreationTimestamp="2024-10-09 00:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:49:42.978592302 +0000 UTC m=+41.266216697" watchObservedRunningTime="2024-10-09 00:49:42.980087897 +0000 UTC m=+41.267712292" Oct 9 00:49:42.987804 kubelet[2818]: I1009 00:49:42.987762 2818 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rtgth" podStartSLOduration=25.987726905 podStartE2EDuration="25.987726905s" podCreationTimestamp="2024-10-09 00:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:49:42.987474052 +0000 UTC m=+41.275098447" watchObservedRunningTime="2024-10-09 00:49:42.987726905 +0000 UTC m=+41.275351260" Oct 9 00:49:43.141859 systemd[1]: run-netns-cni\x2dc842ee0e\x2d38ec\x2d9ec8\x2d2e17\x2d471860c9b235.mount: 
Deactivated successfully. Oct 9 00:49:43.151505 systemd[1]: Started sshd@9-10.0.0.71:22-10.0.0.1:49964.service - OpenSSH per-connection server daemon (10.0.0.1:49964). Oct 9 00:49:43.204063 sshd[4841]: Accepted publickey for core from 10.0.0.1 port 49964 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:49:43.204360 sshd[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:49:43.219836 systemd-logind[1577]: New session 10 of user core. Oct 9 00:49:43.234362 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 00:49:43.411792 sshd[4841]: pam_unix(sshd:session): session closed for user core Oct 9 00:49:43.417261 systemd[1]: Started sshd@10-10.0.0.71:22-10.0.0.1:49972.service - OpenSSH per-connection server daemon (10.0.0.1:49972). Oct 9 00:49:43.417625 systemd[1]: sshd@9-10.0.0.71:22-10.0.0.1:49964.service: Deactivated successfully. Oct 9 00:49:43.420283 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. Oct 9 00:49:43.421469 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 00:49:43.422218 systemd-logind[1577]: Removed session 10. Oct 9 00:49:43.449623 sshd[4856]: Accepted publickey for core from 10.0.0.1 port 49972 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:49:43.450757 sshd[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:49:43.454424 systemd-logind[1577]: New session 11 of user core. Oct 9 00:49:43.458258 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 00:49:43.648346 sshd[4856]: pam_unix(sshd:session): session closed for user core Oct 9 00:49:43.654861 systemd[1]: sshd@10-10.0.0.71:22-10.0.0.1:49972.service: Deactivated successfully. Oct 9 00:49:43.656873 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 00:49:43.664739 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. 
Oct 9 00:49:43.670977 systemd[1]: Started sshd@11-10.0.0.71:22-10.0.0.1:49986.service - OpenSSH per-connection server daemon (10.0.0.1:49986). Oct 9 00:49:43.675128 systemd-logind[1577]: Removed session 11. Oct 9 00:49:43.716680 sshd[4872]: Accepted publickey for core from 10.0.0.1 port 49986 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:49:43.717833 sshd[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:49:43.721849 systemd-logind[1577]: New session 12 of user core. Oct 9 00:49:43.733306 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 00:49:43.841253 systemd-networkd[1240]: cali1f1bdbe4c2c: Gained IPv6LL Oct 9 00:49:43.912738 sshd[4872]: pam_unix(sshd:session): session closed for user core Oct 9 00:49:43.916740 systemd[1]: sshd@11-10.0.0.71:22-10.0.0.1:49986.service: Deactivated successfully. Oct 9 00:49:43.920097 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. Oct 9 00:49:43.920602 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 00:49:43.921769 systemd-logind[1577]: Removed session 12. 
Oct 9 00:49:43.969547 systemd-networkd[1240]: cali99ec482902d: Gained IPv6LL Oct 9 00:49:43.978358 kubelet[2818]: E1009 00:49:43.978323 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:43.978358 kubelet[2818]: E1009 00:49:43.978341 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:44.161214 systemd-networkd[1240]: cali4a6e1654186: Gained IPv6LL Oct 9 00:49:44.218496 containerd[1604]: time="2024-10-09T00:49:44.218390596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:44.219383 containerd[1604]: time="2024-10-09T00:49:44.219209680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 9 00:49:44.220121 containerd[1604]: time="2024-10-09T00:49:44.220090337Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:44.222473 containerd[1604]: time="2024-10-09T00:49:44.222283816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:44.223008 containerd[1604]: time="2024-10-09T00:49:44.222894258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 1.699200495s" Oct 9 00:49:44.223008 containerd[1604]: time="2024-10-09T00:49:44.222927665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Oct 9 00:49:44.224571 containerd[1604]: time="2024-10-09T00:49:44.223682976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 00:49:44.230216 containerd[1604]: time="2024-10-09T00:49:44.230108222Z" level=info msg="CreateContainer within sandbox \"3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 00:49:44.252096 containerd[1604]: time="2024-10-09T00:49:44.252035371Z" level=info msg="CreateContainer within sandbox \"3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"791f1b14c0f349611bf548cc8fc4060c7ac2014eb15676684d499b12d3acb234\"" Oct 9 00:49:44.253813 containerd[1604]: time="2024-10-09T00:49:44.252790962Z" level=info msg="StartContainer for \"791f1b14c0f349611bf548cc8fc4060c7ac2014eb15676684d499b12d3acb234\"" Oct 9 00:49:44.333856 containerd[1604]: time="2024-10-09T00:49:44.333809458Z" level=info msg="StartContainer for \"791f1b14c0f349611bf548cc8fc4060c7ac2014eb15676684d499b12d3acb234\" returns successfully" Oct 9 00:49:44.418280 systemd-networkd[1240]: cali23fe3d43dab: Gained IPv6LL Oct 9 00:49:44.986918 kubelet[2818]: E1009 00:49:44.986884 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:44.988265 kubelet[2818]: E1009 00:49:44.987590 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:49:45.087791 containerd[1604]: time="2024-10-09T00:49:45.087106308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:45.087791 containerd[1604]: time="2024-10-09T00:49:45.087539872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Oct 9 00:49:45.088406 containerd[1604]: time="2024-10-09T00:49:45.088381277Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:45.090310 containerd[1604]: time="2024-10-09T00:49:45.090284969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:45.090873 containerd[1604]: time="2024-10-09T00:49:45.090836037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 867.121894ms" Oct 9 00:49:45.090873 containerd[1604]: time="2024-10-09T00:49:45.090870883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Oct 9 00:49:45.093150 containerd[1604]: time="2024-10-09T00:49:45.093125964Z" level=info msg="CreateContainer within sandbox \"26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 
00:49:45.106257 containerd[1604]: time="2024-10-09T00:49:45.106225004Z" level=info msg="CreateContainer within sandbox \"26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"72a0d6baa5c28baff31d5d79b17636ebddb16809dd850be6df1ecaa7b1321b5f\"" Oct 9 00:49:45.107371 containerd[1604]: time="2024-10-09T00:49:45.107305375Z" level=info msg="StartContainer for \"72a0d6baa5c28baff31d5d79b17636ebddb16809dd850be6df1ecaa7b1321b5f\"" Oct 9 00:49:45.171661 containerd[1604]: time="2024-10-09T00:49:45.171606822Z" level=info msg="StartContainer for \"72a0d6baa5c28baff31d5d79b17636ebddb16809dd850be6df1ecaa7b1321b5f\" returns successfully" Oct 9 00:49:45.174290 containerd[1604]: time="2024-10-09T00:49:45.174162962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 00:49:46.056808 kubelet[2818]: I1009 00:49:46.056755 2818 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-798fcc5cf9-ncvpp" podStartSLOduration=21.356367147 podStartE2EDuration="23.05671348s" podCreationTimestamp="2024-10-09 00:49:23 +0000 UTC" firstStartedPulling="2024-10-09 00:49:42.522881592 +0000 UTC m=+40.810505987" lastFinishedPulling="2024-10-09 00:49:44.223227925 +0000 UTC m=+42.510852320" observedRunningTime="2024-10-09 00:49:45.000748509 +0000 UTC m=+43.288372904" watchObservedRunningTime="2024-10-09 00:49:46.05671348 +0000 UTC m=+44.344337875" Oct 9 00:49:46.092483 containerd[1604]: time="2024-10-09T00:49:46.092425502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:46.093133 containerd[1604]: time="2024-10-09T00:49:46.093091429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 9 00:49:46.094323 containerd[1604]: 
time="2024-10-09T00:49:46.094285457Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:46.096127 containerd[1604]: time="2024-10-09T00:49:46.096098083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:49:46.097572 containerd[1604]: time="2024-10-09T00:49:46.097545440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 923.261254ms" Oct 9 00:49:46.097652 containerd[1604]: time="2024-10-09T00:49:46.097575445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 9 00:49:46.100733 containerd[1604]: time="2024-10-09T00:49:46.100687280Z" level=info msg="CreateContainer within sandbox \"26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 00:49:46.112384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784070437.mount: Deactivated successfully. 
Oct 9 00:49:46.115338 containerd[1604]: time="2024-10-09T00:49:46.115292750Z" level=info msg="CreateContainer within sandbox \"26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ee690a97b8bbed3634bcc3a4b53758ea5448ae27efbeb8aad15deb54368da112\"" Oct 9 00:49:46.115730 containerd[1604]: time="2024-10-09T00:49:46.115700108Z" level=info msg="StartContainer for \"ee690a97b8bbed3634bcc3a4b53758ea5448ae27efbeb8aad15deb54368da112\"" Oct 9 00:49:46.177705 containerd[1604]: time="2024-10-09T00:49:46.177658743Z" level=info msg="StartContainer for \"ee690a97b8bbed3634bcc3a4b53758ea5448ae27efbeb8aad15deb54368da112\" returns successfully" Oct 9 00:49:46.901713 kubelet[2818]: I1009 00:49:46.901561 2818 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 00:49:46.909636 kubelet[2818]: I1009 00:49:46.909598 2818 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 00:49:48.924522 systemd[1]: Started sshd@12-10.0.0.71:22-10.0.0.1:49998.service - OpenSSH per-connection server daemon (10.0.0.1:49998). Oct 9 00:49:48.961358 sshd[5035]: Accepted publickey for core from 10.0.0.1 port 49998 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:49:48.962849 sshd[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:49:48.966073 systemd-logind[1577]: New session 13 of user core. Oct 9 00:49:48.973376 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 00:49:49.154661 sshd[5035]: pam_unix(sshd:session): session closed for user core Oct 9 00:49:49.164286 systemd[1]: Started sshd@13-10.0.0.71:22-10.0.0.1:50000.service - OpenSSH per-connection server daemon (10.0.0.1:50000). 
Oct 9 00:49:49.165150 systemd[1]: sshd@12-10.0.0.71:22-10.0.0.1:49998.service: Deactivated successfully. Oct 9 00:49:49.166643 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 00:49:49.167294 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. Oct 9 00:49:49.168475 systemd-logind[1577]: Removed session 13. Oct 9 00:49:49.194453 sshd[5047]: Accepted publickey for core from 10.0.0.1 port 50000 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:49:49.195877 sshd[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:49:49.199601 systemd-logind[1577]: New session 14 of user core. Oct 9 00:49:49.213310 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 00:49:49.434720 sshd[5047]: pam_unix(sshd:session): session closed for user core Oct 9 00:49:49.443350 systemd[1]: Started sshd@14-10.0.0.71:22-10.0.0.1:50012.service - OpenSSH per-connection server daemon (10.0.0.1:50012). Oct 9 00:49:49.444120 systemd[1]: sshd@13-10.0.0.71:22-10.0.0.1:50000.service: Deactivated successfully. Oct 9 00:49:49.446141 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 00:49:49.447783 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit. Oct 9 00:49:49.449271 systemd-logind[1577]: Removed session 14. Oct 9 00:49:49.480369 sshd[5061]: Accepted publickey for core from 10.0.0.1 port 50012 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:49:49.481623 sshd[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:49:49.485341 systemd-logind[1577]: New session 15 of user core. Oct 9 00:49:49.494301 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 00:49:50.836401 sshd[5061]: pam_unix(sshd:session): session closed for user core Oct 9 00:49:50.847352 systemd[1]: Started sshd@15-10.0.0.71:22-10.0.0.1:50028.service - OpenSSH per-connection server daemon (10.0.0.1:50028). 
Oct 9 00:49:50.850189 systemd[1]: sshd@14-10.0.0.71:22-10.0.0.1:50012.service: Deactivated successfully. Oct 9 00:49:50.854529 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 00:49:50.858426 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit. Oct 9 00:49:50.861155 systemd-logind[1577]: Removed session 15. Oct 9 00:49:50.889653 sshd[5086]: Accepted publickey for core from 10.0.0.1 port 50028 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:49:50.890899 sshd[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:49:50.894653 systemd-logind[1577]: New session 16 of user core. Oct 9 00:49:50.904301 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 00:49:51.189822 sshd[5086]: pam_unix(sshd:session): session closed for user core Oct 9 00:49:51.199745 systemd[1]: Started sshd@16-10.0.0.71:22-10.0.0.1:50036.service - OpenSSH per-connection server daemon (10.0.0.1:50036). Oct 9 00:49:51.200768 systemd[1]: sshd@15-10.0.0.71:22-10.0.0.1:50028.service: Deactivated successfully. Oct 9 00:49:51.205717 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 00:49:51.207264 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit. Oct 9 00:49:51.208335 systemd-logind[1577]: Removed session 16. Oct 9 00:49:51.230429 sshd[5101]: Accepted publickey for core from 10.0.0.1 port 50036 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:49:51.231633 sshd[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:49:51.239172 systemd-logind[1577]: New session 17 of user core. Oct 9 00:49:51.246389 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 00:49:51.374595 sshd[5101]: pam_unix(sshd:session): session closed for user core Oct 9 00:49:51.377429 systemd[1]: sshd@16-10.0.0.71:22-10.0.0.1:50036.service: Deactivated successfully. 
Oct 9 00:49:51.380017 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit. Oct 9 00:49:51.380185 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 00:49:51.382614 systemd-logind[1577]: Removed session 17. Oct 9 00:49:56.399840 systemd[1]: Started sshd@17-10.0.0.71:22-10.0.0.1:49300.service - OpenSSH per-connection server daemon (10.0.0.1:49300). Oct 9 00:49:56.434411 sshd[5130]: Accepted publickey for core from 10.0.0.1 port 49300 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:49:56.435774 sshd[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:49:56.440166 systemd-logind[1577]: New session 18 of user core. Oct 9 00:49:56.451501 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 00:49:56.629797 sshd[5130]: pam_unix(sshd:session): session closed for user core Oct 9 00:49:56.632809 systemd[1]: sshd@17-10.0.0.71:22-10.0.0.1:49300.service: Deactivated successfully. Oct 9 00:49:56.636022 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit. Oct 9 00:49:56.636199 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 00:49:56.637565 systemd-logind[1577]: Removed session 18. Oct 9 00:50:01.644274 systemd[1]: Started sshd@18-10.0.0.71:22-10.0.0.1:49306.service - OpenSSH per-connection server daemon (10.0.0.1:49306). Oct 9 00:50:01.679658 sshd[5151]: Accepted publickey for core from 10.0.0.1 port 49306 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:50:01.680897 sshd[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:50:01.685135 systemd-logind[1577]: New session 19 of user core. Oct 9 00:50:01.691276 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 9 00:50:01.794893 containerd[1604]: time="2024-10-09T00:50:01.794858903Z" level=info msg="StopPodSandbox for \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\"" Oct 9 00:50:01.836259 sshd[5151]: pam_unix(sshd:session): session closed for user core Oct 9 00:50:01.840212 systemd[1]: sshd@18-10.0.0.71:22-10.0.0.1:49306.service: Deactivated successfully. Oct 9 00:50:01.843693 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit. Oct 9 00:50:01.843859 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 00:50:01.845159 systemd-logind[1577]: Removed session 19. Oct 9 00:50:01.882348 containerd[1604]: 2024-10-09 00:50:01.842 [WARNING][5178] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rtgth-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"33d01ef4-50fd-4adb-84dd-990d0fff876a", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd", Pod:"coredns-76f75df574-rtgth", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99ec482902d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:50:01.882348 containerd[1604]: 2024-10-09 00:50:01.843 [INFO][5178] k8s.go 608: Cleaning up netns ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Oct 9 00:50:01.882348 containerd[1604]: 2024-10-09 00:50:01.843 [INFO][5178] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" iface="eth0" netns="" Oct 9 00:50:01.882348 containerd[1604]: 2024-10-09 00:50:01.843 [INFO][5178] k8s.go 615: Releasing IP address(es) ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Oct 9 00:50:01.882348 containerd[1604]: 2024-10-09 00:50:01.843 [INFO][5178] utils.go 188: Calico CNI releasing IP address ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Oct 9 00:50:01.882348 containerd[1604]: 2024-10-09 00:50:01.869 [INFO][5190] ipam_plugin.go 417: Releasing address using handleID ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" HandleID="k8s-pod-network.86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Workload="localhost-k8s-coredns--76f75df574--rtgth-eth0" Oct 9 00:50:01.882348 containerd[1604]: 2024-10-09 00:50:01.869 [INFO][5190] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:50:01.882348 containerd[1604]: 2024-10-09 00:50:01.869 [INFO][5190] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:50:01.882348 containerd[1604]: 2024-10-09 00:50:01.877 [WARNING][5190] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" HandleID="k8s-pod-network.86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Workload="localhost-k8s-coredns--76f75df574--rtgth-eth0" Oct 9 00:50:01.882348 containerd[1604]: 2024-10-09 00:50:01.877 [INFO][5190] ipam_plugin.go 445: Releasing address using workloadID ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" HandleID="k8s-pod-network.86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Workload="localhost-k8s-coredns--76f75df574--rtgth-eth0" Oct 9 00:50:01.882348 containerd[1604]: 2024-10-09 00:50:01.879 [INFO][5190] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 00:50:01.882348 containerd[1604]: 2024-10-09 00:50:01.880 [INFO][5178] k8s.go 621: Teardown processing complete. ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Oct 9 00:50:01.882731 containerd[1604]: time="2024-10-09T00:50:01.882373140Z" level=info msg="TearDown network for sandbox \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\" successfully" Oct 9 00:50:01.882731 containerd[1604]: time="2024-10-09T00:50:01.882554607Z" level=info msg="StopPodSandbox for \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\" returns successfully" Oct 9 00:50:01.883405 containerd[1604]: time="2024-10-09T00:50:01.883368289Z" level=info msg="RemovePodSandbox for \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\"" Oct 9 00:50:01.885606 containerd[1604]: time="2024-10-09T00:50:01.885566339Z" level=info msg="Forcibly stopping sandbox \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\"" Oct 9 00:50:01.949637 containerd[1604]: 2024-10-09 00:50:01.918 [WARNING][5213] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--rtgth-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"33d01ef4-50fd-4adb-84dd-990d0fff876a", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a45ff3b8cc446d2a6bff3833e64c01631d5d8a4c47a294e3e9c41544759cc8fd", Pod:"coredns-76f75df574-rtgth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali99ec482902d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:50:01.949637 containerd[1604]: 2024-10-09 00:50:01.918 [INFO][5213] k8s.go 608: Cleaning up netns 
ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Oct 9 00:50:01.949637 containerd[1604]: 2024-10-09 00:50:01.918 [INFO][5213] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" iface="eth0" netns="" Oct 9 00:50:01.949637 containerd[1604]: 2024-10-09 00:50:01.918 [INFO][5213] k8s.go 615: Releasing IP address(es) ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Oct 9 00:50:01.949637 containerd[1604]: 2024-10-09 00:50:01.918 [INFO][5213] utils.go 188: Calico CNI releasing IP address ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Oct 9 00:50:01.949637 containerd[1604]: 2024-10-09 00:50:01.937 [INFO][5220] ipam_plugin.go 417: Releasing address using handleID ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" HandleID="k8s-pod-network.86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Workload="localhost-k8s-coredns--76f75df574--rtgth-eth0" Oct 9 00:50:01.949637 containerd[1604]: 2024-10-09 00:50:01.937 [INFO][5220] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:50:01.949637 containerd[1604]: 2024-10-09 00:50:01.937 [INFO][5220] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:50:01.949637 containerd[1604]: 2024-10-09 00:50:01.945 [WARNING][5220] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" HandleID="k8s-pod-network.86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Workload="localhost-k8s-coredns--76f75df574--rtgth-eth0" Oct 9 00:50:01.949637 containerd[1604]: 2024-10-09 00:50:01.945 [INFO][5220] ipam_plugin.go 445: Releasing address using workloadID ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" HandleID="k8s-pod-network.86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Workload="localhost-k8s-coredns--76f75df574--rtgth-eth0" Oct 9 00:50:01.949637 containerd[1604]: 2024-10-09 00:50:01.946 [INFO][5220] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:50:01.949637 containerd[1604]: 2024-10-09 00:50:01.947 [INFO][5213] k8s.go 621: Teardown processing complete. ContainerID="86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d" Oct 9 00:50:01.949637 containerd[1604]: time="2024-10-09T00:50:01.949613979Z" level=info msg="TearDown network for sandbox \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\" successfully" Oct 9 00:50:01.952300 containerd[1604]: time="2024-10-09T00:50:01.952266976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 00:50:01.952381 containerd[1604]: time="2024-10-09T00:50:01.952322865Z" level=info msg="RemovePodSandbox \"86db90b4dda442797641ec3aa09ead35f373a28336abbb2d1d5344431056b16d\" returns successfully" Oct 9 00:50:01.952909 containerd[1604]: time="2024-10-09T00:50:01.952878308Z" level=info msg="StopPodSandbox for \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\"" Oct 9 00:50:02.018302 containerd[1604]: 2024-10-09 00:50:01.984 [WARNING][5243] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0", GenerateName:"calico-kube-controllers-798fcc5cf9-", Namespace:"calico-system", SelfLink:"", UID:"faba5546-81fe-4ddf-8df9-993b0e47da47", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 49, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"798fcc5cf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27", Pod:"calico-kube-controllers-798fcc5cf9-ncvpp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f1bdbe4c2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:50:02.018302 containerd[1604]: 2024-10-09 00:50:01.985 [INFO][5243] k8s.go 608: Cleaning up netns ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Oct 9 00:50:02.018302 containerd[1604]: 2024-10-09 00:50:01.985 [INFO][5243] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" iface="eth0" netns="" Oct 9 00:50:02.018302 containerd[1604]: 2024-10-09 00:50:01.985 [INFO][5243] k8s.go 615: Releasing IP address(es) ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Oct 9 00:50:02.018302 containerd[1604]: 2024-10-09 00:50:01.985 [INFO][5243] utils.go 188: Calico CNI releasing IP address ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Oct 9 00:50:02.018302 containerd[1604]: 2024-10-09 00:50:02.005 [INFO][5252] ipam_plugin.go 417: Releasing address using handleID ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" HandleID="k8s-pod-network.4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Workload="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" Oct 9 00:50:02.018302 containerd[1604]: 2024-10-09 00:50:02.005 [INFO][5252] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:50:02.018302 containerd[1604]: 2024-10-09 00:50:02.005 [INFO][5252] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:50:02.018302 containerd[1604]: 2024-10-09 00:50:02.013 [WARNING][5252] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" HandleID="k8s-pod-network.4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Workload="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" Oct 9 00:50:02.018302 containerd[1604]: 2024-10-09 00:50:02.013 [INFO][5252] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" HandleID="k8s-pod-network.4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Workload="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" Oct 9 00:50:02.018302 containerd[1604]: 2024-10-09 00:50:02.014 [INFO][5252] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:50:02.018302 containerd[1604]: 2024-10-09 00:50:02.016 [INFO][5243] k8s.go 621: Teardown processing complete. ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Oct 9 00:50:02.018836 containerd[1604]: time="2024-10-09T00:50:02.018336972Z" level=info msg="TearDown network for sandbox \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\" successfully" Oct 9 00:50:02.018836 containerd[1604]: time="2024-10-09T00:50:02.018359775Z" level=info msg="StopPodSandbox for \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\" returns successfully" Oct 9 00:50:02.019350 containerd[1604]: time="2024-10-09T00:50:02.019030675Z" level=info msg="RemovePodSandbox for \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\"" Oct 9 00:50:02.019350 containerd[1604]: time="2024-10-09T00:50:02.019096685Z" level=info msg="Forcibly stopping sandbox \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\"" Oct 9 00:50:02.094652 containerd[1604]: 2024-10-09 00:50:02.061 [WARNING][5274] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0", GenerateName:"calico-kube-controllers-798fcc5cf9-", Namespace:"calico-system", SelfLink:"", UID:"faba5546-81fe-4ddf-8df9-993b0e47da47", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 49, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"798fcc5cf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cbbd4b196423f65a5d1c1464295babac6328dbc770fdb1e4b20f9582ddd2f27", Pod:"calico-kube-controllers-798fcc5cf9-ncvpp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1f1bdbe4c2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 00:50:02.094652 containerd[1604]: 2024-10-09 00:50:02.061 [INFO][5274] k8s.go 608: Cleaning up netns ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Oct 9 00:50:02.094652 containerd[1604]: 2024-10-09 00:50:02.061 [INFO][5274] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" iface="eth0" netns="" Oct 9 00:50:02.094652 containerd[1604]: 2024-10-09 00:50:02.062 [INFO][5274] k8s.go 615: Releasing IP address(es) ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Oct 9 00:50:02.094652 containerd[1604]: 2024-10-09 00:50:02.062 [INFO][5274] utils.go 188: Calico CNI releasing IP address ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Oct 9 00:50:02.094652 containerd[1604]: 2024-10-09 00:50:02.080 [INFO][5282] ipam_plugin.go 417: Releasing address using handleID ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" HandleID="k8s-pod-network.4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Workload="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" Oct 9 00:50:02.094652 containerd[1604]: 2024-10-09 00:50:02.080 [INFO][5282] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 00:50:02.094652 containerd[1604]: 2024-10-09 00:50:02.080 [INFO][5282] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 00:50:02.094652 containerd[1604]: 2024-10-09 00:50:02.088 [WARNING][5282] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" HandleID="k8s-pod-network.4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Workload="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" Oct 9 00:50:02.094652 containerd[1604]: 2024-10-09 00:50:02.088 [INFO][5282] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" HandleID="k8s-pod-network.4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Workload="localhost-k8s-calico--kube--controllers--798fcc5cf9--ncvpp-eth0" Oct 9 00:50:02.094652 containerd[1604]: 2024-10-09 00:50:02.090 [INFO][5282] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 00:50:02.094652 containerd[1604]: 2024-10-09 00:50:02.092 [INFO][5274] k8s.go 621: Teardown processing complete. ContainerID="4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2" Oct 9 00:50:02.095645 containerd[1604]: time="2024-10-09T00:50:02.095098837Z" level=info msg="TearDown network for sandbox \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\" successfully" Oct 9 00:50:02.098756 containerd[1604]: time="2024-10-09T00:50:02.098660045Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 00:50:02.098756 containerd[1604]: time="2024-10-09T00:50:02.098717293Z" level=info msg="RemovePodSandbox \"4200581ec6f2f5f224d8b434658dab8cef999a2176bf54b4f00a6872d60ea2f2\" returns successfully"
Oct 9 00:50:02.099201 containerd[1604]: time="2024-10-09T00:50:02.099164760Z" level=info msg="StopPodSandbox for \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\""
Oct 9 00:50:02.164739 containerd[1604]: 2024-10-09 00:50:02.132 [WARNING][5306] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--n4fv5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"84dbcea1-dd9f-40be-9404-2879458c14d6", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9", Pod:"coredns-76f75df574-n4fv5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a6e1654186", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 00:50:02.164739 containerd[1604]: 2024-10-09 00:50:02.132 [INFO][5306] k8s.go 608: Cleaning up netns ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976"
Oct 9 00:50:02.164739 containerd[1604]: 2024-10-09 00:50:02.132 [INFO][5306] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" iface="eth0" netns=""
Oct 9 00:50:02.164739 containerd[1604]: 2024-10-09 00:50:02.132 [INFO][5306] k8s.go 615: Releasing IP address(es) ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976"
Oct 9 00:50:02.164739 containerd[1604]: 2024-10-09 00:50:02.132 [INFO][5306] utils.go 188: Calico CNI releasing IP address ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976"
Oct 9 00:50:02.164739 containerd[1604]: 2024-10-09 00:50:02.150 [INFO][5314] ipam_plugin.go 417: Releasing address using handleID ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" HandleID="k8s-pod-network.2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" Workload="localhost-k8s-coredns--76f75df574--n4fv5-eth0"
Oct 9 00:50:02.164739 containerd[1604]: 2024-10-09 00:50:02.150 [INFO][5314] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 00:50:02.164739 containerd[1604]: 2024-10-09 00:50:02.150 [INFO][5314] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 00:50:02.164739 containerd[1604]: 2024-10-09 00:50:02.159 [WARNING][5314] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" HandleID="k8s-pod-network.2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" Workload="localhost-k8s-coredns--76f75df574--n4fv5-eth0"
Oct 9 00:50:02.164739 containerd[1604]: 2024-10-09 00:50:02.159 [INFO][5314] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" HandleID="k8s-pod-network.2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" Workload="localhost-k8s-coredns--76f75df574--n4fv5-eth0"
Oct 9 00:50:02.164739 containerd[1604]: 2024-10-09 00:50:02.161 [INFO][5314] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 00:50:02.164739 containerd[1604]: 2024-10-09 00:50:02.162 [INFO][5306] k8s.go 621: Teardown processing complete. ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976"
Oct 9 00:50:02.165188 containerd[1604]: time="2024-10-09T00:50:02.164766769Z" level=info msg="TearDown network for sandbox \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\" successfully"
Oct 9 00:50:02.165188 containerd[1604]: time="2024-10-09T00:50:02.164789413Z" level=info msg="StopPodSandbox for \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\" returns successfully"
Oct 9 00:50:02.165286 containerd[1604]: time="2024-10-09T00:50:02.165225317Z" level=info msg="RemovePodSandbox for \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\""
Oct 9 00:50:02.165286 containerd[1604]: time="2024-10-09T00:50:02.165263763Z" level=info msg="Forcibly stopping sandbox \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\""
Oct 9 00:50:02.229937 containerd[1604]: 2024-10-09 00:50:02.197 [WARNING][5337] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--n4fv5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"84dbcea1-dd9f-40be-9404-2879458c14d6", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a4fffbcadec80a2d54be36d41d01d6869695c5d7aa3dfa132e357c337d1d3d9", Pod:"coredns-76f75df574-n4fv5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a6e1654186", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 00:50:02.229937 containerd[1604]: 2024-10-09 00:50:02.197 [INFO][5337] k8s.go 608: Cleaning up netns ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976"
Oct 9 00:50:02.229937 containerd[1604]: 2024-10-09 00:50:02.197 [INFO][5337] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" iface="eth0" netns=""
Oct 9 00:50:02.229937 containerd[1604]: 2024-10-09 00:50:02.197 [INFO][5337] k8s.go 615: Releasing IP address(es) ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976"
Oct 9 00:50:02.229937 containerd[1604]: 2024-10-09 00:50:02.197 [INFO][5337] utils.go 188: Calico CNI releasing IP address ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976"
Oct 9 00:50:02.229937 containerd[1604]: 2024-10-09 00:50:02.217 [INFO][5345] ipam_plugin.go 417: Releasing address using handleID ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" HandleID="k8s-pod-network.2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" Workload="localhost-k8s-coredns--76f75df574--n4fv5-eth0"
Oct 9 00:50:02.229937 containerd[1604]: 2024-10-09 00:50:02.217 [INFO][5345] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 00:50:02.229937 containerd[1604]: 2024-10-09 00:50:02.217 [INFO][5345] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 00:50:02.229937 containerd[1604]: 2024-10-09 00:50:02.224 [WARNING][5345] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" HandleID="k8s-pod-network.2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" Workload="localhost-k8s-coredns--76f75df574--n4fv5-eth0"
Oct 9 00:50:02.229937 containerd[1604]: 2024-10-09 00:50:02.224 [INFO][5345] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" HandleID="k8s-pod-network.2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976" Workload="localhost-k8s-coredns--76f75df574--n4fv5-eth0"
Oct 9 00:50:02.229937 containerd[1604]: 2024-10-09 00:50:02.226 [INFO][5345] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 00:50:02.229937 containerd[1604]: 2024-10-09 00:50:02.228 [INFO][5337] k8s.go 621: Teardown processing complete. ContainerID="2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976"
Oct 9 00:50:02.229937 containerd[1604]: time="2024-10-09T00:50:02.229891468Z" level=info msg="TearDown network for sandbox \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\" successfully"
Oct 9 00:50:02.232524 containerd[1604]: time="2024-10-09T00:50:02.232486173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 9 00:50:02.232524 containerd[1604]: time="2024-10-09T00:50:02.232547502Z" level=info msg="RemovePodSandbox \"2777c35a52744ac82a375ddd81f779ee600b2545440fff7fb756cbb9223f8976\" returns successfully"
Oct 9 00:50:02.233036 containerd[1604]: time="2024-10-09T00:50:02.233010611Z" level=info msg="StopPodSandbox for \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\""
Oct 9 00:50:02.297416 containerd[1604]: 2024-10-09 00:50:02.265 [WARNING][5372] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pjbzb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a439271e-ca34-412b-84ad-b23f24ed45b0", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 49, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a", Pod:"csi-node-driver-pjbzb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali23fe3d43dab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 00:50:02.297416 containerd[1604]: 2024-10-09 00:50:02.266 [INFO][5372] k8s.go 608: Cleaning up netns ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb"
Oct 9 00:50:02.297416 containerd[1604]: 2024-10-09 00:50:02.266 [INFO][5372] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" iface="eth0" netns=""
Oct 9 00:50:02.297416 containerd[1604]: 2024-10-09 00:50:02.266 [INFO][5372] k8s.go 615: Releasing IP address(es) ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb"
Oct 9 00:50:02.297416 containerd[1604]: 2024-10-09 00:50:02.266 [INFO][5372] utils.go 188: Calico CNI releasing IP address ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb"
Oct 9 00:50:02.297416 containerd[1604]: 2024-10-09 00:50:02.284 [INFO][5380] ipam_plugin.go 417: Releasing address using handleID ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" HandleID="k8s-pod-network.9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" Workload="localhost-k8s-csi--node--driver--pjbzb-eth0"
Oct 9 00:50:02.297416 containerd[1604]: 2024-10-09 00:50:02.284 [INFO][5380] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 00:50:02.297416 containerd[1604]: 2024-10-09 00:50:02.284 [INFO][5380] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 00:50:02.297416 containerd[1604]: 2024-10-09 00:50:02.292 [WARNING][5380] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" HandleID="k8s-pod-network.9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" Workload="localhost-k8s-csi--node--driver--pjbzb-eth0"
Oct 9 00:50:02.297416 containerd[1604]: 2024-10-09 00:50:02.292 [INFO][5380] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" HandleID="k8s-pod-network.9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" Workload="localhost-k8s-csi--node--driver--pjbzb-eth0"
Oct 9 00:50:02.297416 containerd[1604]: 2024-10-09 00:50:02.293 [INFO][5380] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 00:50:02.297416 containerd[1604]: 2024-10-09 00:50:02.295 [INFO][5372] k8s.go 621: Teardown processing complete. ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb"
Oct 9 00:50:02.297973 containerd[1604]: time="2024-10-09T00:50:02.297452888Z" level=info msg="TearDown network for sandbox \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\" successfully"
Oct 9 00:50:02.297973 containerd[1604]: time="2024-10-09T00:50:02.297473811Z" level=info msg="StopPodSandbox for \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\" returns successfully"
Oct 9 00:50:02.297973 containerd[1604]: time="2024-10-09T00:50:02.297916477Z" level=info msg="RemovePodSandbox for \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\""
Oct 9 00:50:02.297973 containerd[1604]: time="2024-10-09T00:50:02.297943321Z" level=info msg="Forcibly stopping sandbox \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\""
Oct 9 00:50:02.364180 containerd[1604]: 2024-10-09 00:50:02.330 [WARNING][5403] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pjbzb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a439271e-ca34-412b-84ad-b23f24ed45b0", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 0, 49, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26815e59e50cb4c3ec5c9436906e952329c8055d9e1018dbcabd43982ec8ac4a", Pod:"csi-node-driver-pjbzb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali23fe3d43dab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 00:50:02.364180 containerd[1604]: 2024-10-09 00:50:02.330 [INFO][5403] k8s.go 608: Cleaning up netns ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb"
Oct 9 00:50:02.364180 containerd[1604]: 2024-10-09 00:50:02.330 [INFO][5403] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" iface="eth0" netns=""
Oct 9 00:50:02.364180 containerd[1604]: 2024-10-09 00:50:02.330 [INFO][5403] k8s.go 615: Releasing IP address(es) ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb"
Oct 9 00:50:02.364180 containerd[1604]: 2024-10-09 00:50:02.330 [INFO][5403] utils.go 188: Calico CNI releasing IP address ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb"
Oct 9 00:50:02.364180 containerd[1604]: 2024-10-09 00:50:02.350 [INFO][5411] ipam_plugin.go 417: Releasing address using handleID ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" HandleID="k8s-pod-network.9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" Workload="localhost-k8s-csi--node--driver--pjbzb-eth0"
Oct 9 00:50:02.364180 containerd[1604]: 2024-10-09 00:50:02.350 [INFO][5411] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 00:50:02.364180 containerd[1604]: 2024-10-09 00:50:02.350 [INFO][5411] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 00:50:02.364180 containerd[1604]: 2024-10-09 00:50:02.358 [WARNING][5411] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" HandleID="k8s-pod-network.9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" Workload="localhost-k8s-csi--node--driver--pjbzb-eth0"
Oct 9 00:50:02.364180 containerd[1604]: 2024-10-09 00:50:02.359 [INFO][5411] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" HandleID="k8s-pod-network.9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb" Workload="localhost-k8s-csi--node--driver--pjbzb-eth0"
Oct 9 00:50:02.364180 containerd[1604]: 2024-10-09 00:50:02.360 [INFO][5411] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 00:50:02.364180 containerd[1604]: 2024-10-09 00:50:02.362 [INFO][5403] k8s.go 621: Teardown processing complete. ContainerID="9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb"
Oct 9 00:50:02.364180 containerd[1604]: time="2024-10-09T00:50:02.364190906Z" level=info msg="TearDown network for sandbox \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\" successfully"
Oct 9 00:50:02.375614 containerd[1604]: time="2024-10-09T00:50:02.375573595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 9 00:50:02.375721 containerd[1604]: time="2024-10-09T00:50:02.375630803Z" level=info msg="RemovePodSandbox \"9cb855768eccfd35ed0190e043ee005267ac64dbc3552882157f7e4e890ad6cb\" returns successfully"
Oct 9 00:50:06.193085 kubelet[2818]: E1009 00:50:06.192913 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:50:06.206567 kubelet[2818]: I1009 00:50:06.206516 2818 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-pjbzb" podStartSLOduration=39.690128206 podStartE2EDuration="43.206476407s" podCreationTimestamp="2024-10-09 00:49:23 +0000 UTC" firstStartedPulling="2024-10-09 00:49:42.581637443 +0000 UTC m=+40.869261798" lastFinishedPulling="2024-10-09 00:49:46.097985604 +0000 UTC m=+44.385609999" observedRunningTime="2024-10-09 00:49:47.024131338 +0000 UTC m=+45.311755693" watchObservedRunningTime="2024-10-09 00:50:06.206476407 +0000 UTC m=+64.494100802"
Oct 9 00:50:06.847308 systemd[1]: Started sshd@19-10.0.0.71:22-10.0.0.1:46180.service - OpenSSH per-connection server daemon (10.0.0.1:46180).
Oct 9 00:50:06.880713 sshd[5442]: Accepted publickey for core from 10.0.0.1 port 46180 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:50:06.881900 sshd[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:50:06.885767 systemd-logind[1577]: New session 20 of user core.
Oct 9 00:50:06.891335 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 9 00:50:07.029477 sshd[5442]: pam_unix(sshd:session): session closed for user core
Oct 9 00:50:07.033031 systemd[1]: sshd@19-10.0.0.71:22-10.0.0.1:46180.service: Deactivated successfully.
Oct 9 00:50:07.035228 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit.
Oct 9 00:50:07.035828 systemd[1]: session-20.scope: Deactivated successfully.
Oct 9 00:50:07.036805 systemd-logind[1577]: Removed session 20.