Oct 9 01:06:09.884720 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 9 01:06:09.884756 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Oct 8 23:34:40 -00 2024
Oct 9 01:06:09.884769 kernel: KASLR enabled
Oct 9 01:06:09.884775 kernel: efi: EFI v2.7 by EDK II
Oct 9 01:06:09.884780 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Oct 9 01:06:09.884786 kernel: random: crng init done
Oct 9 01:06:09.884793 kernel: secureboot: Secure boot disabled
Oct 9 01:06:09.884799 kernel: ACPI: Early table checksum verification disabled
Oct 9 01:06:09.884805 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Oct 9 01:06:09.884813 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 9 01:06:09.884820 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:06:09.884826 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:06:09.884832 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:06:09.884838 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:06:09.884846 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:06:09.884854 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:06:09.884860 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:06:09.884866 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:06:09.884872 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:06:09.884878 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 9 01:06:09.884885 kernel: NUMA: Failed to initialise from firmware
Oct 9 01:06:09.884891 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 9 01:06:09.884897 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Oct 9 01:06:09.884903 kernel: Zone ranges:
Oct 9 01:06:09.884909 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 9 01:06:09.884917 kernel: DMA32 empty
Oct 9 01:06:09.884923 kernel: Normal empty
Oct 9 01:06:09.884929 kernel: Movable zone start for each node
Oct 9 01:06:09.884935 kernel: Early memory node ranges
Oct 9 01:06:09.884941 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Oct 9 01:06:09.884947 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Oct 9 01:06:09.884954 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Oct 9 01:06:09.884960 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 9 01:06:09.884967 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 9 01:06:09.884973 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 9 01:06:09.884979 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 9 01:06:09.884985 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 9 01:06:09.884993 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 9 01:06:09.885000 kernel: psci: probing for conduit method from ACPI.
Oct 9 01:06:09.885006 kernel: psci: PSCIv1.1 detected in firmware.
Oct 9 01:06:09.885015 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 9 01:06:09.885022 kernel: psci: Trusted OS migration not required
Oct 9 01:06:09.885028 kernel: psci: SMC Calling Convention v1.1
Oct 9 01:06:09.885036 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 9 01:06:09.885043 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 9 01:06:09.885050 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 9 01:06:09.885057 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 9 01:06:09.885063 kernel: Detected PIPT I-cache on CPU0
Oct 9 01:06:09.885070 kernel: CPU features: detected: GIC system register CPU interface
Oct 9 01:06:09.885077 kernel: CPU features: detected: Hardware dirty bit management
Oct 9 01:06:09.885083 kernel: CPU features: detected: Spectre-v4
Oct 9 01:06:09.885090 kernel: CPU features: detected: Spectre-BHB
Oct 9 01:06:09.885097 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 9 01:06:09.885105 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 9 01:06:09.885111 kernel: CPU features: detected: ARM erratum 1418040
Oct 9 01:06:09.885118 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 9 01:06:09.885125 kernel: alternatives: applying boot alternatives
Oct 9 01:06:09.885132 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d2d67b5440410ae2d0aa86eba97891969be0a7a421fa55f13442706ef7ed2a5e
Oct 9 01:06:09.885139 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 01:06:09.885146 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 9 01:06:09.885152 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 01:06:09.885159 kernel: Fallback order for Node 0: 0
Oct 9 01:06:09.885165 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 9 01:06:09.885172 kernel: Policy zone: DMA
Oct 9 01:06:09.885179 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 01:06:09.885186 kernel: software IO TLB: area num 4.
Oct 9 01:06:09.885192 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Oct 9 01:06:09.885199 kernel: Memory: 2386404K/2572288K available (10240K kernel code, 2184K rwdata, 8092K rodata, 39552K init, 897K bss, 185884K reserved, 0K cma-reserved)
Oct 9 01:06:09.885206 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 9 01:06:09.885212 kernel: trace event string verifier disabled
Oct 9 01:06:09.885219 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 01:06:09.885226 kernel: rcu: RCU event tracing is enabled.
Oct 9 01:06:09.885233 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 9 01:06:09.885240 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 01:06:09.885246 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 01:06:09.885253 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 01:06:09.885261 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 9 01:06:09.885267 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 9 01:06:09.885274 kernel: GICv3: 256 SPIs implemented
Oct 9 01:06:09.885280 kernel: GICv3: 0 Extended SPIs implemented
Oct 9 01:06:09.885287 kernel: Root IRQ handler: gic_handle_irq
Oct 9 01:06:09.885294 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 9 01:06:09.885301 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 9 01:06:09.885307 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 9 01:06:09.885314 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Oct 9 01:06:09.885321 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Oct 9 01:06:09.885328 kernel: GICv3: using LPI property table @0x00000000400f0000
Oct 9 01:06:09.885336 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Oct 9 01:06:09.885343 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 01:06:09.885350 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 9 01:06:09.885356 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 9 01:06:09.885363 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 9 01:06:09.885370 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 9 01:06:09.885376 kernel: arm-pv: using stolen time PV
Oct 9 01:06:09.885383 kernel: Console: colour dummy device 80x25
Oct 9 01:06:09.885390 kernel: ACPI: Core revision 20230628
Oct 9 01:06:09.885397 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 9 01:06:09.885403 kernel: pid_max: default: 32768 minimum: 301
Oct 9 01:06:09.885411 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 9 01:06:09.885418 kernel: landlock: Up and running.
Oct 9 01:06:09.885424 kernel: SELinux: Initializing.
Oct 9 01:06:09.885431 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 01:06:09.885438 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 01:06:09.885446 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:06:09.885452 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:06:09.885460 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 01:06:09.885472 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 01:06:09.885480 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 9 01:06:09.885487 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 9 01:06:09.885494 kernel: Remapping and enabling EFI services.
Oct 9 01:06:09.885501 kernel: smp: Bringing up secondary CPUs ...
Oct 9 01:06:09.885508 kernel: Detected PIPT I-cache on CPU1
Oct 9 01:06:09.885515 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 9 01:06:09.885522 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Oct 9 01:06:09.885528 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 9 01:06:09.885535 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 9 01:06:09.885544 kernel: Detected PIPT I-cache on CPU2
Oct 9 01:06:09.885551 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 9 01:06:09.885563 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Oct 9 01:06:09.885571 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 9 01:06:09.885578 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 9 01:06:09.885586 kernel: Detected PIPT I-cache on CPU3
Oct 9 01:06:09.885593 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 9 01:06:09.885600 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Oct 9 01:06:09.885607 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 9 01:06:09.885615 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 9 01:06:09.885623 kernel: smp: Brought up 1 node, 4 CPUs
Oct 9 01:06:09.885630 kernel: SMP: Total of 4 processors activated.
Oct 9 01:06:09.885637 kernel: CPU features: detected: 32-bit EL0 Support
Oct 9 01:06:09.885644 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 9 01:06:09.885651 kernel: CPU features: detected: Common not Private translations
Oct 9 01:06:09.885658 kernel: CPU features: detected: CRC32 instructions
Oct 9 01:06:09.885665 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 9 01:06:09.885673 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 9 01:06:09.885680 kernel: CPU features: detected: LSE atomic instructions
Oct 9 01:06:09.885687 kernel: CPU features: detected: Privileged Access Never
Oct 9 01:06:09.885694 kernel: CPU features: detected: RAS Extension Support
Oct 9 01:06:09.885701 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 9 01:06:09.885708 kernel: CPU: All CPU(s) started at EL1
Oct 9 01:06:09.885715 kernel: alternatives: applying system-wide alternatives
Oct 9 01:06:09.885722 kernel: devtmpfs: initialized
Oct 9 01:06:09.885730 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 01:06:09.885738 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 9 01:06:09.885757 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 01:06:09.885764 kernel: SMBIOS 3.0.0 present.
Oct 9 01:06:09.885772 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Oct 9 01:06:09.885779 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 01:06:09.885787 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 9 01:06:09.885794 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 9 01:06:09.885802 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 9 01:06:09.885809 kernel: audit: initializing netlink subsys (disabled)
Oct 9 01:06:09.885818 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Oct 9 01:06:09.885825 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 01:06:09.885833 kernel: cpuidle: using governor menu
Oct 9 01:06:09.885840 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 9 01:06:09.885847 kernel: ASID allocator initialised with 32768 entries
Oct 9 01:06:09.885854 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 01:06:09.885861 kernel: Serial: AMBA PL011 UART driver
Oct 9 01:06:09.885868 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 9 01:06:09.885875 kernel: Modules: 0 pages in range for non-PLT usage
Oct 9 01:06:09.885884 kernel: Modules: 508992 pages in range for PLT usage
Oct 9 01:06:09.885891 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 9 01:06:09.885898 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 9 01:06:09.885905 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 9 01:06:09.885912 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 9 01:06:09.885919 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 01:06:09.885927 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 01:06:09.885934 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 9 01:06:09.885941 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 9 01:06:09.885964 kernel: ACPI: Added _OSI(Module Device)
Oct 9 01:06:09.885971 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 01:06:09.885978 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 01:06:09.885985 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 01:06:09.885992 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 01:06:09.885999 kernel: ACPI: Interpreter enabled
Oct 9 01:06:09.886006 kernel: ACPI: Using GIC for interrupt routing
Oct 9 01:06:09.886013 kernel: ACPI: MCFG table detected, 1 entries
Oct 9 01:06:09.886020 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 9 01:06:09.886029 kernel: printk: console [ttyAMA0] enabled
Oct 9 01:06:09.886036 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 01:06:09.886163 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 01:06:09.886234 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 9 01:06:09.886299 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 9 01:06:09.886361 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 9 01:06:09.886425 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 9 01:06:09.886436 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 9 01:06:09.886444 kernel: PCI host bridge to bus 0000:00
Oct 9 01:06:09.886518 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 9 01:06:09.886578 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 9 01:06:09.886636 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 9 01:06:09.886693 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 01:06:09.886844 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 9 01:06:09.886926 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 9 01:06:09.886993 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 9 01:06:09.887056 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 9 01:06:09.887122 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 9 01:06:09.887187 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 9 01:06:09.887252 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 9 01:06:09.887318 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 9 01:06:09.887380 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 9 01:06:09.887436 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 9 01:06:09.887503 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 9 01:06:09.887513 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 9 01:06:09.887521 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 9 01:06:09.887528 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 9 01:06:09.887536 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 9 01:06:09.887543 kernel: iommu: Default domain type: Translated
Oct 9 01:06:09.887552 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 9 01:06:09.887559 kernel: efivars: Registered efivars operations
Oct 9 01:06:09.887567 kernel: vgaarb: loaded
Oct 9 01:06:09.887574 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 9 01:06:09.887581 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 01:06:09.887588 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 01:06:09.887595 kernel: pnp: PnP ACPI init
Oct 9 01:06:09.887671 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 9 01:06:09.887682 kernel: pnp: PnP ACPI: found 1 devices
Oct 9 01:06:09.887690 kernel: NET: Registered PF_INET protocol family
Oct 9 01:06:09.887697 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 9 01:06:09.887705 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 9 01:06:09.887712 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 01:06:09.887720 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 01:06:09.887727 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 9 01:06:09.887734 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 9 01:06:09.887750 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 01:06:09.887759 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 01:06:09.887767 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 01:06:09.887774 kernel: PCI: CLS 0 bytes, default 64
Oct 9 01:06:09.887781 kernel: kvm [1]: HYP mode not available
Oct 9 01:06:09.887788 kernel: Initialise system trusted keyrings
Oct 9 01:06:09.887796 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 9 01:06:09.887803 kernel: Key type asymmetric registered
Oct 9 01:06:09.887810 kernel: Asymmetric key parser 'x509' registered
Oct 9 01:06:09.887817 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 9 01:06:09.887826 kernel: io scheduler mq-deadline registered
Oct 9 01:06:09.887833 kernel: io scheduler kyber registered
Oct 9 01:06:09.887840 kernel: io scheduler bfq registered
Oct 9 01:06:09.887848 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 9 01:06:09.887856 kernel: ACPI: button: Power Button [PWRB]
Oct 9 01:06:09.887863 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 9 01:06:09.887930 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 9 01:06:09.887939 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 01:06:09.887947 kernel: thunder_xcv, ver 1.0
Oct 9 01:06:09.887955 kernel: thunder_bgx, ver 1.0
Oct 9 01:06:09.887962 kernel: nicpf, ver 1.0
Oct 9 01:06:09.887969 kernel: nicvf, ver 1.0
Oct 9 01:06:09.888040 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 9 01:06:09.888102 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-09T01:06:09 UTC (1728435969)
Oct 9 01:06:09.888111 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 9 01:06:09.888119 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 9 01:06:09.888126 kernel: watchdog: Delayed init of the lockup detector failed: -19
Oct 9 01:06:09.888134 kernel: watchdog: Hard watchdog permanently disabled
Oct 9 01:06:09.888142 kernel: NET: Registered PF_INET6 protocol family
Oct 9 01:06:09.888149 kernel: Segment Routing with IPv6
Oct 9 01:06:09.888156 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 01:06:09.888163 kernel: NET: Registered PF_PACKET protocol family
Oct 9 01:06:09.888170 kernel: Key type dns_resolver registered
Oct 9 01:06:09.888177 kernel: registered taskstats version 1
Oct 9 01:06:09.888185 kernel: Loading compiled-in X.509 certificates
Oct 9 01:06:09.888192 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 80611b0a9480eaf6d787b908c6349fdb5d07fa81'
Oct 9 01:06:09.888201 kernel: Key type .fscrypt registered
Oct 9 01:06:09.888208 kernel: Key type fscrypt-provisioning registered
Oct 9 01:06:09.888215 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 01:06:09.888222 kernel: ima: Allocated hash algorithm: sha1
Oct 9 01:06:09.888229 kernel: ima: No architecture policies found
Oct 9 01:06:09.888236 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 9 01:06:09.888243 kernel: clk: Disabling unused clocks
Oct 9 01:06:09.888251 kernel: Freeing unused kernel memory: 39552K
Oct 9 01:06:09.888258 kernel: Run /init as init process
Oct 9 01:06:09.888266 kernel: with arguments:
Oct 9 01:06:09.888273 kernel: /init
Oct 9 01:06:09.888280 kernel: with environment:
Oct 9 01:06:09.888287 kernel: HOME=/
Oct 9 01:06:09.888294 kernel: TERM=linux
Oct 9 01:06:09.888302 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 01:06:09.888310 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 01:06:09.888320 systemd[1]: Detected virtualization kvm.
Oct 9 01:06:09.888329 systemd[1]: Detected architecture arm64.
Oct 9 01:06:09.888336 systemd[1]: Running in initrd.
Oct 9 01:06:09.888344 systemd[1]: No hostname configured, using default hostname.
Oct 9 01:06:09.888351 systemd[1]: Hostname set to .
Oct 9 01:06:09.888359 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 01:06:09.888367 systemd[1]: Queued start job for default target initrd.target.
Oct 9 01:06:09.888375 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:06:09.888383 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:06:09.888392 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 01:06:09.888400 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 01:06:09.888408 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 01:06:09.888416 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 01:06:09.888425 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 01:06:09.888434 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 01:06:09.888441 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:06:09.888450 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:06:09.888458 systemd[1]: Reached target paths.target - Path Units.
Oct 9 01:06:09.888472 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 01:06:09.888480 systemd[1]: Reached target swap.target - Swaps.
Oct 9 01:06:09.888488 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 01:06:09.888495 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 01:06:09.888503 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 01:06:09.888511 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 01:06:09.888521 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 01:06:09.888529 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:06:09.888540 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:06:09.888548 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:06:09.888556 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 01:06:09.888564 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 9 01:06:09.888572 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 01:06:09.888580 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 01:06:09.888587 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 01:06:09.888597 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 01:06:09.888607 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 01:06:09.888615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:06:09.888623 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 01:06:09.888633 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:06:09.888641 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 01:06:09.888654 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 01:06:09.888662 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:06:09.888670 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 01:06:09.888696 systemd-journald[238]: Collecting audit messages is disabled.
Oct 9 01:06:09.888719 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 01:06:09.888728 systemd-journald[238]: Journal started
Oct 9 01:06:09.888758 systemd-journald[238]: Runtime Journal (/run/log/journal/54be862274cb47d2af516b2e27efce1e) is 5.9M, max 47.3M, 41.4M free.
Oct 9 01:06:09.874245 systemd-modules-load[239]: Inserted module 'overlay'
Oct 9 01:06:09.891488 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 01:06:09.891506 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 01:06:09.892782 kernel: Bridge firewalling registered
Oct 9 01:06:09.892771 systemd-modules-load[239]: Inserted module 'br_netfilter'
Oct 9 01:06:09.894760 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 01:06:09.895451 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:06:09.899411 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 01:06:09.901719 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 01:06:09.902781 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:06:09.910004 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:06:09.911108 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:06:09.912644 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:06:09.922873 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 9 01:06:09.924681 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 01:06:09.934018 dracut-cmdline[274]: dracut-dracut-053
Oct 9 01:06:09.936629 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d2d67b5440410ae2d0aa86eba97891969be0a7a421fa55f13442706ef7ed2a5e
Oct 9 01:06:09.952993 systemd-resolved[278]: Positive Trust Anchors:
Oct 9 01:06:09.953060 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 01:06:09.953092 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 01:06:09.957684 systemd-resolved[278]: Defaulting to hostname 'linux'.
Oct 9 01:06:09.958937 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 01:06:09.959792 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:06:10.002778 kernel: SCSI subsystem initialized
Oct 9 01:06:10.007757 kernel: Loading iSCSI transport class v2.0-870.
Oct 9 01:06:10.014768 kernel: iscsi: registered transport (tcp)
Oct 9 01:06:10.026771 kernel: iscsi: registered transport (qla4xxx)
Oct 9 01:06:10.026784 kernel: QLogic iSCSI HBA Driver
Oct 9 01:06:10.068779 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 9 01:06:10.074893 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 9 01:06:10.093214 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 9 01:06:10.093262 kernel: device-mapper: uevent: version 1.0.3
Oct 9 01:06:10.094774 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 9 01:06:10.138764 kernel: raid6: neonx8 gen() 15777 MB/s
Oct 9 01:06:10.155759 kernel: raid6: neonx4 gen() 15667 MB/s
Oct 9 01:06:10.172765 kernel: raid6: neonx2 gen() 13264 MB/s
Oct 9 01:06:10.189755 kernel: raid6: neonx1 gen() 10458 MB/s
Oct 9 01:06:10.206756 kernel: raid6: int64x8 gen() 6962 MB/s
Oct 9 01:06:10.223767 kernel: raid6: int64x4 gen() 7353 MB/s
Oct 9 01:06:10.240758 kernel: raid6: int64x2 gen() 6130 MB/s
Oct 9 01:06:10.257756 kernel: raid6: int64x1 gen() 5053 MB/s
Oct 9 01:06:10.257769 kernel: raid6: using algorithm neonx8 gen() 15777 MB/s
Oct 9 01:06:10.274763 kernel: raid6: .... xor() 11929 MB/s, rmw enabled
Oct 9 01:06:10.274776 kernel: raid6: using neon recovery algorithm
Oct 9 01:06:10.279759 kernel: xor: measuring software checksum speed
Oct 9 01:06:10.279773 kernel: 8regs : 19783 MB/sec
Oct 9 01:06:10.281179 kernel: 32regs : 17407 MB/sec
Oct 9 01:06:10.281199 kernel: arm64_neon : 26998 MB/sec
Oct 9 01:06:10.281209 kernel: xor: using function: arm64_neon (26998 MB/sec)
Oct 9 01:06:10.330765 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 9 01:06:10.341808 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 01:06:10.352908 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:06:10.364246 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Oct 9 01:06:10.367344 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:06:10.372899 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 9 01:06:10.385047 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Oct 9 01:06:10.413410 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 01:06:10.420880 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 01:06:10.461178 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:06:10.468902 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 9 01:06:10.479989 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 9 01:06:10.481217 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 01:06:10.482564 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:06:10.483450 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 01:06:10.493936 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 9 01:06:10.497761 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 9 01:06:10.501324 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 9 01:06:10.504253 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 01:06:10.510955 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 9 01:06:10.510975 kernel: GPT:9289727 != 19775487
Oct 9 01:06:10.510984 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 9 01:06:10.511002 kernel: GPT:9289727 != 19775487
Oct 9 01:06:10.511169 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 01:06:10.512725 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 9 01:06:10.512749 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 01:06:10.511286 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:06:10.514598 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 01:06:10.515398 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 01:06:10.515527 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:06:10.517291 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:06:10.524953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:06:10.534219 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (504)
Oct 9 01:06:10.534254 kernel: BTRFS: device fsid c25b3a2f-539f-42a7-8842-97b35e474647 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (516)
Oct 9 01:06:10.540337 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:06:10.546020 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 9 01:06:10.552596 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 9 01:06:10.556727 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 01:06:10.560113 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 9 01:06:10.560993 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 9 01:06:10.573939 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 9 01:06:10.575363 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 01:06:10.578860 disk-uuid[550]: Primary Header is updated.
Oct 9 01:06:10.578860 disk-uuid[550]: Secondary Entries is updated.
Oct 9 01:06:10.578860 disk-uuid[550]: Secondary Header is updated.
Oct 9 01:06:10.581758 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 01:06:10.600085 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:06:11.591760 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 01:06:11.594167 disk-uuid[551]: The operation has completed successfully.
Oct 9 01:06:11.613178 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 9 01:06:11.613276 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 9 01:06:11.632925 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 9 01:06:11.636726 sh[574]: Success
Oct 9 01:06:11.651763 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 9 01:06:11.692238 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 9 01:06:11.693774 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 9 01:06:11.694538 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 9 01:06:11.704991 kernel: BTRFS info (device dm-0): first mount of filesystem c25b3a2f-539f-42a7-8842-97b35e474647
Oct 9 01:06:11.705030 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 9 01:06:11.705913 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 9 01:06:11.705929 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 9 01:06:11.706965 kernel: BTRFS info (device dm-0): using free space tree
Oct 9 01:06:11.710496 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 9 01:06:11.711710 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 9 01:06:11.712498 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 9 01:06:11.714383 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 9 01:06:11.725161 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 01:06:11.725210 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 9 01:06:11.725220 kernel: BTRFS info (device vda6): using free space tree
Oct 9 01:06:11.727766 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 01:06:11.734494 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 9 01:06:11.736765 kernel: BTRFS info (device vda6): last unmount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 01:06:11.742443 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 9 01:06:11.748939 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 9 01:06:11.805595 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 01:06:11.813886 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 01:06:11.832118 systemd-networkd[766]: lo: Link UP
Oct 9 01:06:11.832131 systemd-networkd[766]: lo: Gained carrier
Oct 9 01:06:11.832921 systemd-networkd[766]: Enumeration completed
Oct 9 01:06:11.833369 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:06:11.833372 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 01:06:11.835440 ignition[673]: Ignition 2.19.0
Oct 9 01:06:11.833838 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 01:06:11.835447 ignition[673]: Stage: fetch-offline
Oct 9 01:06:11.834157 systemd-networkd[766]: eth0: Link UP
Oct 9 01:06:11.835489 ignition[673]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:06:11.834160 systemd-networkd[766]: eth0: Gained carrier
Oct 9 01:06:11.835496 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:06:11.834166 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:06:11.835645 ignition[673]: parsed url from cmdline: ""
Oct 9 01:06:11.834996 systemd[1]: Reached target network.target - Network.
Oct 9 01:06:11.835648 ignition[673]: no config URL provided
Oct 9 01:06:11.835652 ignition[673]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 01:06:11.835659 ignition[673]: no config at "/usr/lib/ignition/user.ign"
Oct 9 01:06:11.835683 ignition[673]: op(1): [started] loading QEMU firmware config module
Oct 9 01:06:11.835688 ignition[673]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 9 01:06:11.847650 ignition[673]: op(1): [finished] loading QEMU firmware config module
Oct 9 01:06:11.848791 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 01:06:11.886475 ignition[673]: parsing config with SHA512: 3ce9e1365131c8e46ba3d8186a37e802a0b4a2fe5259319330cacea1355b5a72f991cf68957e5740751af246f46cdb88aa6ab43e3264ece2bd18e63e109cd2b2
Oct 9 01:06:11.892428 unknown[673]: fetched base config from "system"
Oct 9 01:06:11.892439 unknown[673]: fetched user config from "qemu"
Oct 9 01:06:11.892877 ignition[673]: fetch-offline: fetch-offline passed
Oct 9 01:06:11.892942 ignition[673]: Ignition finished successfully
Oct 9 01:06:11.894344 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 01:06:11.895775 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 9 01:06:11.906887 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 9 01:06:11.916781 ignition[772]: Ignition 2.19.0
Oct 9 01:06:11.916791 ignition[772]: Stage: kargs
Oct 9 01:06:11.916945 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:06:11.916954 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:06:11.917785 ignition[772]: kargs: kargs passed
Oct 9 01:06:11.917828 ignition[772]: Ignition finished successfully
Oct 9 01:06:11.920427 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 9 01:06:11.926887 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 9 01:06:11.936637 ignition[781]: Ignition 2.19.0
Oct 9 01:06:11.936653 ignition[781]: Stage: disks
Oct 9 01:06:11.936845 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:06:11.936855 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:06:11.937714 ignition[781]: disks: disks passed
Oct 9 01:06:11.937826 ignition[781]: Ignition finished successfully
Oct 9 01:06:11.940801 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 9 01:06:11.942428 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 9 01:06:11.943334 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 01:06:11.944839 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 01:06:11.946294 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 01:06:11.947569 systemd[1]: Reached target basic.target - Basic System.
Oct 9 01:06:11.955889 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 01:06:11.964916 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 9 01:06:11.968060 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 01:06:11.976839 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 01:06:12.019759 kernel: EXT4-fs (vda9): mounted filesystem 3a4adf89-ce2b-46a9-8e1a-433a27a27d16 r/w with ordered data mode. Quota mode: none.
Oct 9 01:06:12.020479 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 01:06:12.021523 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 01:06:12.029835 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 01:06:12.031285 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 01:06:12.032324 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 9 01:06:12.032393 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 01:06:12.032446 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 01:06:12.038535 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799)
Oct 9 01:06:12.038030 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 01:06:12.039872 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 01:06:12.043374 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 01:06:12.043393 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 9 01:06:12.043402 kernel: BTRFS info (device vda6): using free space tree
Oct 9 01:06:12.045765 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 01:06:12.046723 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 01:06:12.083800 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 01:06:12.087075 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Oct 9 01:06:12.090723 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 01:06:12.093545 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 01:06:12.162156 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 01:06:12.176883 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 01:06:12.178189 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 01:06:12.182776 kernel: BTRFS info (device vda6): last unmount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 01:06:12.198042 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 01:06:12.200800 ignition[912]: INFO : Ignition 2.19.0
Oct 9 01:06:12.200800 ignition[912]: INFO : Stage: mount
Oct 9 01:06:12.200800 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:06:12.200800 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:06:12.204984 ignition[912]: INFO : mount: mount passed
Oct 9 01:06:12.204984 ignition[912]: INFO : Ignition finished successfully
Oct 9 01:06:12.203013 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 01:06:12.207839 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 01:06:12.704441 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 01:06:12.719931 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 01:06:12.726360 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926)
Oct 9 01:06:12.726386 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 01:06:12.726397 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 9 01:06:12.727086 kernel: BTRFS info (device vda6): using free space tree
Oct 9 01:06:12.729761 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 01:06:12.730435 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 01:06:12.745505 ignition[943]: INFO : Ignition 2.19.0
Oct 9 01:06:12.745505 ignition[943]: INFO : Stage: files
Oct 9 01:06:12.746769 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:06:12.746769 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:06:12.746769 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 01:06:12.749511 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 01:06:12.749511 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 01:06:12.749511 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 01:06:12.749511 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 01:06:12.753538 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 01:06:12.753538 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 9 01:06:12.753538 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Oct 9 01:06:12.749579 unknown[943]: wrote ssh authorized keys file for user: core
Oct 9 01:06:12.808656 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 9 01:06:12.903660 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 9 01:06:12.905126 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 01:06:12.905126 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 01:06:12.905126 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 01:06:12.905126 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 01:06:12.905126 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 01:06:12.905126 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 01:06:12.905126 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 01:06:12.905126 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 01:06:12.905126 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 01:06:12.916639 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 01:06:12.916639 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Oct 9 01:06:12.916639 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Oct 9 01:06:12.916639 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Oct 9 01:06:12.916639 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Oct 9 01:06:13.214004 systemd-networkd[766]: eth0: Gained IPv6LL
Oct 9 01:06:13.285089 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 9 01:06:13.907053 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Oct 9 01:06:13.907053 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 9 01:06:13.909719 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 01:06:13.909719 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 01:06:13.909719 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 9 01:06:13.909719 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 9 01:06:13.909719 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 01:06:13.909719 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 01:06:13.909719 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 9 01:06:13.909719 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 9 01:06:13.934711 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 01:06:13.938226 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 01:06:13.939491 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 9 01:06:13.939491 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 01:06:13.939491 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 01:06:13.939491 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 01:06:13.939491 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 01:06:13.939491 ignition[943]: INFO : files: files passed
Oct 9 01:06:13.939491 ignition[943]: INFO : Ignition finished successfully
Oct 9 01:06:13.940784 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 01:06:13.960948 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 01:06:13.963992 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 01:06:13.966757 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 01:06:13.968164 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 01:06:13.972006 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 9 01:06:13.975289 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:06:13.975289 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:06:13.977672 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:06:13.979432 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 01:06:13.980527 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 01:06:13.986937 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 01:06:14.005635 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 01:06:14.005771 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 01:06:14.007466 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 01:06:14.008760 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 01:06:14.010194 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 01:06:14.011921 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 01:06:14.025735 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 01:06:14.028051 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 01:06:14.039222 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:06:14.040172 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:06:14.041643 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 01:06:14.042972 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 01:06:14.043097 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 01:06:14.044939 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 01:06:14.046366 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 01:06:14.047545 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 01:06:14.048783 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 01:06:14.050257 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 01:06:14.051712 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 01:06:14.053079 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 01:06:14.054521 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 01:06:14.055934 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 01:06:14.057197 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 01:06:14.058452 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 01:06:14.058580 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 01:06:14.060384 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:06:14.061991 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:06:14.063409 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 01:06:14.067804 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:06:14.068768 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 01:06:14.068898 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 01:06:14.071112 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 01:06:14.071231 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 01:06:14.072685 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 01:06:14.073840 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 01:06:14.078813 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:06:14.079800 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 01:06:14.081386 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 01:06:14.082627 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 01:06:14.082723 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 01:06:14.083839 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 01:06:14.083920 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 01:06:14.085098 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 01:06:14.085205 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 01:06:14.086549 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 01:06:14.086643 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 01:06:14.101951 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 01:06:14.103367 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 01:06:14.104033 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 01:06:14.104151 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:06:14.105596 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 01:06:14.105698 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 01:06:14.110606 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 01:06:14.110704 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 01:06:14.114297 ignition[999]: INFO : Ignition 2.19.0
Oct 9 01:06:14.114297 ignition[999]: INFO : Stage: umount
Oct 9 01:06:14.115794 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:06:14.115794 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 01:06:14.115794 ignition[999]: INFO : umount: umount passed
Oct 9 01:06:14.115794 ignition[999]: INFO : Ignition finished successfully
Oct 9 01:06:14.117553 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 01:06:14.118494 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 01:06:14.118605 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 01:06:14.119806 systemd[1]: Stopped target network.target - Network.
Oct 9 01:06:14.120855 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 01:06:14.120910 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 01:06:14.122654 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 01:06:14.122698 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 01:06:14.124023 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 01:06:14.124061 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 01:06:14.126015 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 01:06:14.126061 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 01:06:14.127407 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 01:06:14.128691 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 01:06:14.133824 systemd-networkd[766]: eth0: DHCPv6 lease lost
Oct 9 01:06:14.136233 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 01:06:14.136381 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 01:06:14.138016 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 01:06:14.138817 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 01:06:14.141164 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 01:06:14.141233 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:06:14.152886 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 01:06:14.153565 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 01:06:14.153626 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 01:06:14.155203 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 01:06:14.155243 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:06:14.156558 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 01:06:14.156594 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:06:14.158295 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 01:06:14.158335 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:06:14.159835 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:06:14.168942 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 01:06:14.169080 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 01:06:14.179425 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 01:06:14.179587 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:06:14.181389 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 01:06:14.181432 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:06:14.182623 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 01:06:14.182653 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:06:14.183966 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 01:06:14.184012 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 01:06:14.186090 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 01:06:14.186133 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 01:06:14.188055 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 01:06:14.188095 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:06:14.194978 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 01:06:14.195762 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 01:06:14.195814 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:06:14.197493 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 9 01:06:14.197531 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 01:06:14.199115 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 01:06:14.199157 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:06:14.200740 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 01:06:14.200794 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:06:14.202569 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 01:06:14.202688 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 01:06:14.204087 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 01:06:14.204185 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 01:06:14.207137 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 01:06:14.208582 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 01:06:14.208640 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 01:06:14.210687 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 01:06:14.221373 systemd[1]: Switching root.
Oct 9 01:06:14.258572 systemd-journald[238]: Journal stopped
Oct 9 01:06:14.927197 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Oct 9 01:06:14.927247 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 01:06:14.927271 kernel: SELinux: policy capability open_perms=1
Oct 9 01:06:14.927281 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 01:06:14.927290 kernel: SELinux: policy capability always_check_network=0
Oct 9 01:06:14.927300 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 01:06:14.927309 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 01:06:14.927319 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 01:06:14.927331 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 01:06:14.927341 kernel: audit: type=1403 audit(1728435974.409:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 01:06:14.927351 systemd[1]: Successfully loaded SELinux policy in 36.675ms.
Oct 9 01:06:14.927371 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.158ms.
Oct 9 01:06:14.927382 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 01:06:14.927393 systemd[1]: Detected virtualization kvm.
Oct 9 01:06:14.927408 systemd[1]: Detected architecture arm64.
Oct 9 01:06:14.927419 systemd[1]: Detected first boot.
Oct 9 01:06:14.927429 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 01:06:14.927451 zram_generator::config[1044]: No configuration found.
Oct 9 01:06:14.927463 systemd[1]: Populated /etc with preset unit settings.
Oct 9 01:06:14.927473 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 9 01:06:14.927484 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 9 01:06:14.927494 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 9 01:06:14.927505 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 01:06:14.927516 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 01:06:14.927526 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 01:06:14.927538 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 01:06:14.927549 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 01:06:14.927559 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 01:06:14.927570 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 01:06:14.927580 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 01:06:14.927594 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:06:14.927604 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:06:14.927615 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 01:06:14.927626 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 01:06:14.927638 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 01:06:14.927649 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 01:06:14.927660 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 9 01:06:14.927670 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:06:14.927681 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 9 01:06:14.927691 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 9 01:06:14.927701 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 9 01:06:14.927713 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 01:06:14.927724 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:06:14.927734 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 01:06:14.927772 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 01:06:14.927785 systemd[1]: Reached target swap.target - Swaps.
Oct 9 01:06:14.927796 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 01:06:14.927806 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 01:06:14.927816 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:06:14.927827 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:06:14.927838 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:06:14.927851 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 01:06:14.927861 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 01:06:14.927872 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 01:06:14.927882 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 01:06:14.927892 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 01:06:14.927902 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 01:06:14.927913 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 01:06:14.927923 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 01:06:14.927935 systemd[1]: Reached target machines.target - Containers.
Oct 9 01:06:14.927945 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 01:06:14.927958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:06:14.927969 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 01:06:14.927979 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 01:06:14.927990 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:06:14.928001 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 01:06:14.928012 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:06:14.928022 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 01:06:14.928034 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:06:14.928044 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 01:06:14.928054 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 9 01:06:14.928065 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 9 01:06:14.928075 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 9 01:06:14.928085 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 9 01:06:14.928095 kernel: fuse: init (API version 7.39)
Oct 9 01:06:14.928104 kernel: loop: module loaded
Oct 9 01:06:14.928115 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 01:06:14.928126 kernel: ACPI: bus type drm_connector registered
Oct 9 01:06:14.928135 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 01:06:14.928145 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 01:06:14.928156 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 01:06:14.928166 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 01:06:14.928192 systemd-journald[1111]: Collecting audit messages is disabled.
Oct 9 01:06:14.928213 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 9 01:06:14.928226 systemd[1]: Stopped verity-setup.service.
Oct 9 01:06:14.928236 systemd-journald[1111]: Journal started
Oct 9 01:06:14.928257 systemd-journald[1111]: Runtime Journal (/run/log/journal/54be862274cb47d2af516b2e27efce1e) is 5.9M, max 47.3M, 41.4M free.
Oct 9 01:06:14.756034 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 01:06:14.775616 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 01:06:14.775974 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 9 01:06:14.930760 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 01:06:14.931122 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 01:06:14.932023 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 01:06:14.932953 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 01:06:14.933775 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 01:06:14.934653 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 01:06:14.935641 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 01:06:14.937786 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 01:06:14.938875 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:06:14.940009 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 01:06:14.940147 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 01:06:14.941278 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:06:14.941410 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:06:14.942511 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 01:06:14.942649 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 01:06:14.943818 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:06:14.943948 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:06:14.945031 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 01:06:14.945158 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 01:06:14.946325 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:06:14.946485 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:06:14.947541 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:06:14.948726 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 01:06:14.949866 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 01:06:14.961775 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 01:06:14.971833 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 01:06:14.973587 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 01:06:14.974422 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 01:06:14.974461 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 01:06:14.976080 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 01:06:14.978064 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 01:06:14.979804 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 01:06:14.980630 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:06:14.981997 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 01:06:14.984918 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 01:06:14.985758 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 01:06:14.987593 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 01:06:14.989298 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 01:06:14.993230 systemd-journald[1111]: Time spent on flushing to /var/log/journal/54be862274cb47d2af516b2e27efce1e is 20.848ms for 855 entries.
Oct 9 01:06:14.993230 systemd-journald[1111]: System Journal (/var/log/journal/54be862274cb47d2af516b2e27efce1e) is 8.0M, max 195.6M, 187.6M free.
Oct 9 01:06:15.036272 systemd-journald[1111]: Received client request to flush runtime journal.
Oct 9 01:06:15.036877 kernel: loop0: detected capacity change from 0 to 113456
Oct 9 01:06:15.036906 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 01:06:14.993426 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 01:06:15.002021 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 01:06:15.008015 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 01:06:15.014811 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:06:15.016577 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 01:06:15.018177 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 01:06:15.019395 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 01:06:15.020793 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 01:06:15.024079 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:06:15.028064 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 01:06:15.037474 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
Oct 9 01:06:15.037484 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
Oct 9 01:06:15.039911 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 01:06:15.044944 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 01:06:15.046325 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 01:06:15.048212 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 01:06:15.057211 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 01:06:15.058710 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 01:06:15.059318 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 01:06:15.063784 kernel: loop1: detected capacity change from 0 to 116808
Oct 9 01:06:15.064276 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 9 01:06:15.075452 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 01:06:15.083980 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 01:06:15.094684 kernel: loop2: detected capacity change from 0 to 194096
Oct 9 01:06:15.096594 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Oct 9 01:06:15.096610 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Oct 9 01:06:15.100407 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:06:15.142784 kernel: loop3: detected capacity change from 0 to 113456
Oct 9 01:06:15.148760 kernel: loop4: detected capacity change from 0 to 116808
Oct 9 01:06:15.153759 kernel: loop5: detected capacity change from 0 to 194096
Oct 9 01:06:15.157725 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 9 01:06:15.158234 (sd-merge)[1183]: Merged extensions into '/usr'.
Oct 9 01:06:15.163064 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 01:06:15.163076 systemd[1]: Reloading...
Oct 9 01:06:15.211791 zram_generator::config[1207]: No configuration found.
Oct 9 01:06:15.249365 ldconfig[1150]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 01:06:15.308258 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:06:15.343698 systemd[1]: Reloading finished in 180 ms.
Oct 9 01:06:15.387033 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 01:06:15.388135 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 01:06:15.402174 systemd[1]: Starting ensure-sysext.service...
Oct 9 01:06:15.404028 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 01:06:15.414729 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
Oct 9 01:06:15.414757 systemd[1]: Reloading...
Oct 9 01:06:15.421913 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 01:06:15.422461 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 01:06:15.423173 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 01:06:15.423472 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Oct 9 01:06:15.423598 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Oct 9 01:06:15.425901 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 01:06:15.425991 systemd-tmpfiles[1246]: Skipping /boot
Oct 9 01:06:15.432651 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 01:06:15.432873 systemd-tmpfiles[1246]: Skipping /boot
Oct 9 01:06:15.464822 zram_generator::config[1273]: No configuration found.
Oct 9 01:06:15.540068 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:06:15.574969 systemd[1]: Reloading finished in 159 ms.
Oct 9 01:06:15.592821 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 01:06:15.600217 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:06:15.607373 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 01:06:15.609500 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 01:06:15.611784 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 01:06:15.617089 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 01:06:15.630120 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:06:15.635906 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 01:06:15.637660 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 01:06:15.641057 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:06:15.642409 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:06:15.644130 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:06:15.647008 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:06:15.647895 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:06:15.651067 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 01:06:15.653887 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 01:06:15.655692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:06:15.655853 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:06:15.657115 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:06:15.657236 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:06:15.658601 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:06:15.658713 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:06:15.659073 systemd-udevd[1319]: Using default interface naming scheme 'v255'.
Oct 9 01:06:15.665805 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:06:15.683007 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:06:15.686190 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:06:15.690966 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:06:15.691803 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:06:15.692383 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:06:15.693723 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 01:06:15.695773 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 01:06:15.697154 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 01:06:15.701184 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 01:06:15.702626 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:06:15.702800 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:06:15.704002 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:06:15.704115 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:06:15.705469 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:06:15.705590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:06:15.723053 systemd[1]: Finished ensure-sysext.service.
Oct 9 01:06:15.726733 augenrules[1375]: No rules
Oct 9 01:06:15.727342 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 9 01:06:15.727596 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:06:15.735939 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 01:06:15.739221 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 01:06:15.742119 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 01:06:15.746027 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 01:06:15.747810 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1365)
Oct 9 01:06:15.749970 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 01:06:15.750853 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1365)
Oct 9 01:06:15.755941 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 01:06:15.760020 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 9 01:06:15.762676 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 01:06:15.763184 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 01:06:15.763346 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 01:06:15.764346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 01:06:15.764482 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 01:06:15.765777 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1366)
Oct 9 01:06:15.766653 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 01:06:15.766784 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 01:06:15.768917 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 01:06:15.769046 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 01:06:15.770399 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 01:06:15.770535 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 01:06:15.781166 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 01:06:15.781224 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 01:06:15.786038 systemd-resolved[1313]: Positive Trust Anchors:
Oct 9 01:06:15.787851 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 01:06:15.787883 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 01:06:15.804947 systemd-resolved[1313]: Defaulting to hostname 'linux'.
Oct 9 01:06:15.821472 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 01:06:15.822507 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:06:15.826683 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 01:06:15.837488 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 01:06:15.839522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:06:15.840596 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 9 01:06:15.841891 systemd[1]: Reached target time-set.target - System Time Set.
Oct 9 01:06:15.852207 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 01:06:15.853792 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 01:06:15.858327 systemd-networkd[1389]: lo: Link UP
Oct 9 01:06:15.858338 systemd-networkd[1389]: lo: Gained carrier
Oct 9 01:06:15.859128 systemd-networkd[1389]: Enumeration completed
Oct 9 01:06:15.864939 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 01:06:15.865871 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 01:06:15.866306 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:06:15.866313 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 01:06:15.866893 systemd-networkd[1389]: eth0: Link UP
Oct 9 01:06:15.866902 systemd-networkd[1389]: eth0: Gained carrier
Oct 9 01:06:15.866914 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:06:15.867073 systemd[1]: Reached target network.target - Network.
Oct 9 01:06:15.868840 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 01:06:15.885785 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 01:06:15.887816 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection.
Oct 9 01:06:15.888065 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 01:06:15.888391 systemd-timesyncd[1390]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 9 01:06:15.888455 systemd-timesyncd[1390]: Initial clock synchronization to Wed 2024-10-09 01:06:15.598368 UTC.
Oct 9 01:06:15.900072 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:06:15.918824 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 01:06:15.919906 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:06:15.920678 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 01:06:15.921574 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 9 01:06:15.922489 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 9 01:06:15.923557 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 9 01:06:15.924426 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 9 01:06:15.925350 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 9 01:06:15.926252 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 9 01:06:15.926284 systemd[1]: Reached target paths.target - Path Units.
Oct 9 01:06:15.926922 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 01:06:15.928325 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 9 01:06:15.930230 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 9 01:06:15.943577 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 9 01:06:15.945446 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 01:06:15.946708 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 9 01:06:15.947613 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 01:06:15.948349 systemd[1]: Reached target basic.target - Basic System.
Oct 9 01:06:15.949045 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 9 01:06:15.949074 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 9 01:06:15.949894 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 9 01:06:15.951507 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 9 01:06:15.953864 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 01:06:15.953856 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 9 01:06:15.957520 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 9 01:06:15.960125 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 9 01:06:15.962553 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 9 01:06:15.964411 jq[1421]: false
Oct 9 01:06:15.965512 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 9 01:06:15.967915 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 01:06:15.969872 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 01:06:15.975910 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 01:06:15.976108 extend-filesystems[1422]: Found loop3
Oct 9 01:06:15.979724 extend-filesystems[1422]: Found loop4
Oct 9 01:06:15.979724 extend-filesystems[1422]: Found loop5
Oct 9 01:06:15.979724 extend-filesystems[1422]: Found vda
Oct 9 01:06:15.979724 extend-filesystems[1422]: Found vda1
Oct 9 01:06:15.979724 extend-filesystems[1422]: Found vda2
Oct 9 01:06:15.979724 extend-filesystems[1422]: Found vda3
Oct 9 01:06:15.979724 extend-filesystems[1422]: Found usr
Oct 9 01:06:15.979724 extend-filesystems[1422]: Found vda4
Oct 9 01:06:15.979724 extend-filesystems[1422]: Found vda6
Oct 9 01:06:15.979724 extend-filesystems[1422]: Found vda7
Oct 9 01:06:15.979724 extend-filesystems[1422]: Found vda9
Oct 9 01:06:15.979724 extend-filesystems[1422]: Checking size of /dev/vda9
Oct 9 01:06:15.982112 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 01:06:15.987566 dbus-daemon[1420]: [system] SELinux support is enabled
Oct 9 01:06:15.982519 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 01:06:15.985923 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 01:06:15.987694 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 01:06:15.989001 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 01:06:15.994396 extend-filesystems[1422]: Resized partition /dev/vda9
Oct 9 01:06:15.996820 extend-filesystems[1443]: resize2fs 1.47.1 (20-May-2024)
Oct 9 01:06:15.996526 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 01:06:15.998450 jq[1440]: true
Oct 9 01:06:15.999764 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 9 01:06:16.007201 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 01:06:16.007351 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 01:06:16.007587 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 01:06:16.007717 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 01:06:16.011039 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 01:06:16.011217 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 01:06:16.018887 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1345)
Oct 9 01:06:16.019093 systemd-logind[1432]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 9 01:06:16.023081 systemd-logind[1432]: New seat seat0.
Oct 9 01:06:16.025411 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 01:06:16.027416 (ntainerd)[1447]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 01:06:16.029101 dbus-daemon[1420]: [system] Successfully activated service 'org.freedesktop.systemd1'
Oct 9 01:06:16.035794 jq[1446]: true
Oct 9 01:06:16.038768 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 9 01:06:16.044983 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 01:06:16.054903 update_engine[1436]: I20241009 01:06:16.047194 1436 main.cc:92] Flatcar Update Engine starting
Oct 9 01:06:16.045196 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 01:06:16.055191 update_engine[1436]: I20241009 01:06:16.055073 1436 update_check_scheduler.cc:74] Next update check in 3m29s
Oct 9 01:06:16.047255 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 01:06:16.047356 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 01:06:16.055948 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 01:06:16.060201 extend-filesystems[1443]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 9 01:06:16.060201 extend-filesystems[1443]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 9 01:06:16.060201 extend-filesystems[1443]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 9 01:06:16.062818 extend-filesystems[1422]: Resized filesystem in /dev/vda9
Oct 9 01:06:16.064989 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 01:06:16.067072 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 01:06:16.069527 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 01:06:16.073580 tar[1445]: linux-arm64/helm
Oct 9 01:06:16.107968 bash[1476]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 01:06:16.109783 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 01:06:16.112107 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 9 01:06:16.115060 locksmithd[1464]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 01:06:16.207052 containerd[1447]: time="2024-10-09T01:06:16.206966332Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22
Oct 9 01:06:16.232387 containerd[1447]: time="2024-10-09T01:06:16.232352210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:06:16.233857 containerd[1447]: time="2024-10-09T01:06:16.233630075Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:06:16.233884 containerd[1447]: time="2024-10-09T01:06:16.233859527Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 01:06:16.233903 containerd[1447]: time="2024-10-09T01:06:16.233883466Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 01:06:16.234048 containerd[1447]: time="2024-10-09T01:06:16.234026642Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 01:06:16.234074 containerd[1447]: time="2024-10-09T01:06:16.234053897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 01:06:16.234127 containerd[1447]: time="2024-10-09T01:06:16.234111607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:06:16.234152 containerd[1447]: time="2024-10-09T01:06:16.234130381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:06:16.234315 containerd[1447]: time="2024-10-09T01:06:16.234295261Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:06:16.234344 containerd[1447]: time="2024-10-09T01:06:16.234316695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 01:06:16.234344 containerd[1447]: time="2024-10-09T01:06:16.234334312Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:06:16.234377 containerd[1447]: time="2024-10-09T01:06:16.234344335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 01:06:16.234432 containerd[1447]: time="2024-10-09T01:06:16.234413070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:06:16.234660 containerd[1447]: time="2024-10-09T01:06:16.234629415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:06:16.234813 containerd[1447]: time="2024-10-09T01:06:16.234789938Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:06:16.234849 containerd[1447]: time="2024-10-09T01:06:16.234813801Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 01:06:16.234920 containerd[1447]: time="2024-10-09T01:06:16.234903045Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 01:06:16.234965 containerd[1447]: time="2024-10-09T01:06:16.234951849Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 01:06:16.238366 containerd[1447]: time="2024-10-09T01:06:16.238337530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 01:06:16.238402 containerd[1447]: time="2024-10-09T01:06:16.238388686Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 01:06:16.238425 containerd[1447]: time="2024-10-09T01:06:16.238406882Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 01:06:16.238425 containerd[1447]: time="2024-10-09T01:06:16.238421840Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 01:06:16.238467 containerd[1447]: time="2024-10-09T01:06:16.238454877Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 01:06:16.238607 containerd[1447]: time="2024-10-09T01:06:16.238589610Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 01:06:16.238853 containerd[1447]: time="2024-10-09T01:06:16.238827350Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 01:06:16.238977 containerd[1447]: time="2024-10-09T01:06:16.238958383Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 01:06:16.239001 containerd[1447]: time="2024-10-09T01:06:16.238979662Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 01:06:16.239001 containerd[1447]: time="2024-10-09T01:06:16.238993965Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 01:06:16.239038 containerd[1447]: time="2024-10-09T01:06:16.239006532Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 01:06:16.239038 containerd[1447]: time="2024-10-09T01:06:16.239024381Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 01:06:16.239038 containerd[1447]: time="2024-10-09T01:06:16.239035599Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 01:06:16.239085 containerd[1447]: time="2024-10-09T01:06:16.239048051Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 01:06:16.239085 containerd[1447]: time="2024-10-09T01:06:16.239060811Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 01:06:16.239085 containerd[1447]: time="2024-10-09T01:06:16.239072916Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 01:06:16.239085 containerd[1447]: time="2024-10-09T01:06:16.239084057Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 01:06:16.239163 containerd[1447]: time="2024-10-09T01:06:16.239094812Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 01:06:16.239163 containerd[1447]: time="2024-10-09T01:06:16.239113933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239163 containerd[1447]: time="2024-10-09T01:06:16.239126346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239163 containerd[1447]: time="2024-10-09T01:06:16.239137526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239163 containerd[1447]: time="2024-10-09T01:06:16.239148783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239163 containerd[1447]: time="2024-10-09T01:06:16.239160386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239260 containerd[1447]: time="2024-10-09T01:06:16.239172183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239260 containerd[1447]: time="2024-10-09T01:06:16.239183671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239260 containerd[1447]: time="2024-10-09T01:06:16.239195467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239260 containerd[1447]: time="2024-10-09T01:06:16.239207726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239260 containerd[1447]: time="2024-10-09T01:06:16.239221180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239260 containerd[1447]: time="2024-10-09T01:06:16.239232167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239260 containerd[1447]: time="2024-10-09T01:06:16.239242845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239260 containerd[1447]: time="2024-10-09T01:06:16.239253408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239384 containerd[1447]: time="2024-10-09T01:06:16.239266515Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 01:06:16.239384 containerd[1447]: time="2024-10-09T01:06:16.239285251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239384 containerd[1447]: time="2024-10-09T01:06:16.239297548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239384 containerd[1447]: time="2024-10-09T01:06:16.239307186Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 01:06:16.239459 containerd[1447]: time="2024-10-09T01:06:16.239409344Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 9 01:06:16.239459 containerd[1447]: time="2024-10-09T01:06:16.239424571Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 9 01:06:16.239459 containerd[1447]: time="2024-10-09T01:06:16.239433477Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 9 01:06:16.239459 containerd[1447]: time="2024-10-09T01:06:16.239444502Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 9 01:06:16.239531 containerd[1447]: time="2024-10-09T01:06:16.239462158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239531 containerd[1447]: time="2024-10-09T01:06:16.239495967Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 9 01:06:16.239531 containerd[1447]: time="2024-10-09T01:06:16.239506028Z" level=info msg="NRI interface is disabled by configuration."
Oct 9 01:06:16.239531 containerd[1447]: time="2024-10-09T01:06:16.239515165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 9 01:06:16.239873 containerd[1447]: time="2024-10-09T01:06:16.239828617Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 9 01:06:16.239873 containerd[1447]: time="2024-10-09T01:06:16.239876998Z" level=info msg="Connect containerd service"
Oct 9 01:06:16.240001 containerd[1447]: time="2024-10-09T01:06:16.239905525Z" level=info msg="using legacy CRI server"
Oct 9 01:06:16.240001 containerd[1447]: time="2024-10-09T01:06:16.239912271Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 9 01:06:16.240001 containerd[1447]: time="2024-10-09T01:06:16.239978000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 9 01:06:16.240612 containerd[1447]: time="2024-10-09T01:06:16.240587172Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 9 01:06:16.241210 containerd[1447]: time="2024-10-09T01:06:16.241190831Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 9 01:06:16.241248 containerd[1447]: time="2024-10-09T01:06:16.241234971Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 9 01:06:16.241794 containerd[1447]: time="2024-10-09T01:06:16.241764536Z" level=info msg="Start subscribing containerd event"
Oct 9 01:06:16.241821 containerd[1447]: time="2024-10-09T01:06:16.241810411Z" level=info msg="Start recovering state"
Oct 9 01:06:16.241880 containerd[1447]: time="2024-10-09T01:06:16.241866502Z" level=info msg="Start event monitor"
Oct 9 01:06:16.241904 containerd[1447]: time="2024-10-09T01:06:16.241889054Z" level=info msg="Start snapshots syncer"
Oct 9 01:06:16.241904 containerd[1447]: time="2024-10-09T01:06:16.241898229Z" level=info msg="Start cni network conf syncer for default"
Oct 9 01:06:16.241937 containerd[1447]: time="2024-10-09T01:06:16.241904436Z" level=info msg="Start streaming server"
Oct 9 01:06:16.242028 containerd[1447]: time="2024-10-09T01:06:16.242016116Z" level=info msg="containerd successfully booted in 0.036669s"
Oct 9 01:06:16.242292 systemd[1]: Started containerd.service - containerd container runtime.
Oct 9 01:06:16.392344 tar[1445]: linux-arm64/LICENSE
Oct 9 01:06:16.392415 tar[1445]: linux-arm64/README.md
Oct 9 01:06:16.404835 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 9 01:06:16.551427 sshd_keygen[1439]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 01:06:16.569783 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 9 01:06:16.579048 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 9 01:06:16.584229 systemd[1]: issuegen.service: Deactivated successfully.
Oct 9 01:06:16.584402 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 9 01:06:16.586797 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 9 01:06:16.599783 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 9 01:06:16.602161 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 9 01:06:16.604063 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Oct 9 01:06:16.605201 systemd[1]: Reached target getty.target - Login Prompts.
Oct 9 01:06:17.629853 systemd-networkd[1389]: eth0: Gained IPv6LL
Oct 9 01:06:17.632437 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 9 01:06:17.633883 systemd[1]: Reached target network-online.target - Network is Online.
Oct 9 01:06:17.638946 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 9 01:06:17.640893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:06:17.642540 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 9 01:06:17.655085 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 9 01:06:17.655288 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 9 01:06:17.656474 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 9 01:06:17.659550 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 9 01:06:18.109402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:06:18.110572 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 9 01:06:18.112891 (kubelet)[1533]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:06:18.112900 systemd[1]: Startup finished in 515ms (kernel) + 4.700s (initrd) + 3.746s (userspace) = 8.962s.
Oct 9 01:06:18.546374 kubelet[1533]: E1009 01:06:18.546281 1533 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:06:18.549190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:06:18.549335 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:06:22.101306 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 9 01:06:22.102392 systemd[1]: Started sshd@0-10.0.0.142:22-10.0.0.1:60798.service - OpenSSH per-connection server daemon (10.0.0.1:60798).
Oct 9 01:06:22.148013 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 60798 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 01:06:22.151268 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:06:22.158836 systemd-logind[1432]: New session 1 of user core.
Oct 9 01:06:22.159737 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 9 01:06:22.170093 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 9 01:06:22.179781 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 9 01:06:22.181805 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 9 01:06:22.188083 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 9 01:06:22.260662 systemd[1552]: Queued start job for default target default.target.
Oct 9 01:06:22.268817 systemd[1552]: Created slice app.slice - User Application Slice.
Oct 9 01:06:22.268863 systemd[1552]: Reached target paths.target - Paths.
Oct 9 01:06:22.268875 systemd[1552]: Reached target timers.target - Timers.
Oct 9 01:06:22.270091 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 9 01:06:22.279614 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 9 01:06:22.279674 systemd[1552]: Reached target sockets.target - Sockets.
Oct 9 01:06:22.279685 systemd[1552]: Reached target basic.target - Basic System.
Oct 9 01:06:22.279719 systemd[1552]: Reached target default.target - Main User Target.
Oct 9 01:06:22.279764 systemd[1552]: Startup finished in 86ms.
Oct 9 01:06:22.280021 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 9 01:06:22.281298 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 9 01:06:22.343216 systemd[1]: Started sshd@1-10.0.0.142:22-10.0.0.1:60812.service - OpenSSH per-connection server daemon (10.0.0.1:60812).
Oct 9 01:06:22.381473 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 60812 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 01:06:22.382514 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:06:22.386272 systemd-logind[1432]: New session 2 of user core.
Oct 9 01:06:22.393863 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 9 01:06:22.443417 sshd[1563]: pam_unix(sshd:session): session closed for user core
Oct 9 01:06:22.453943 systemd[1]: sshd@1-10.0.0.142:22-10.0.0.1:60812.service: Deactivated successfully.
Oct 9 01:06:22.455387 systemd[1]: session-2.scope: Deactivated successfully.
Oct 9 01:06:22.456481 systemd-logind[1432]: Session 2 logged out. Waiting for processes to exit.
Oct 9 01:06:22.457507 systemd[1]: Started sshd@2-10.0.0.142:22-10.0.0.1:60826.service - OpenSSH per-connection server daemon (10.0.0.1:60826).
Oct 9 01:06:22.458192 systemd-logind[1432]: Removed session 2.
Oct 9 01:06:22.494090 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 60826 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 01:06:22.495201 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:06:22.498789 systemd-logind[1432]: New session 3 of user core.
Oct 9 01:06:22.504843 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 9 01:06:22.551096 sshd[1570]: pam_unix(sshd:session): session closed for user core
Oct 9 01:06:22.561884 systemd[1]: sshd@2-10.0.0.142:22-10.0.0.1:60826.service: Deactivated successfully.
Oct 9 01:06:22.563156 systemd[1]: session-3.scope: Deactivated successfully.
Oct 9 01:06:22.564339 systemd-logind[1432]: Session 3 logged out. Waiting for processes to exit.
Oct 9 01:06:22.565346 systemd[1]: Started sshd@3-10.0.0.142:22-10.0.0.1:60832.service - OpenSSH per-connection server daemon (10.0.0.1:60832).
Oct 9 01:06:22.566045 systemd-logind[1432]: Removed session 3.
Oct 9 01:06:22.602874 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 60832 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 01:06:22.604014 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:06:22.607549 systemd-logind[1432]: New session 4 of user core.
Oct 9 01:06:22.617928 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 9 01:06:22.668919 sshd[1577]: pam_unix(sshd:session): session closed for user core
Oct 9 01:06:22.678914 systemd[1]: sshd@3-10.0.0.142:22-10.0.0.1:60832.service: Deactivated successfully.
Oct 9 01:06:22.680308 systemd[1]: session-4.scope: Deactivated successfully.
Oct 9 01:06:22.681441 systemd-logind[1432]: Session 4 logged out. Waiting for processes to exit.
Oct 9 01:06:22.682488 systemd[1]: Started sshd@4-10.0.0.142:22-10.0.0.1:35398.service - OpenSSH per-connection server daemon (10.0.0.1:35398).
Oct 9 01:06:22.683141 systemd-logind[1432]: Removed session 4.
Oct 9 01:06:22.719909 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 35398 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:06:22.721145 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:06:22.724797 systemd-logind[1432]: New session 5 of user core. Oct 9 01:06:22.733883 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 01:06:22.793227 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 01:06:22.793491 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:06:22.815546 sudo[1587]: pam_unix(sudo:session): session closed for user root Oct 9 01:06:22.817304 sshd[1584]: pam_unix(sshd:session): session closed for user core Oct 9 01:06:22.836056 systemd[1]: sshd@4-10.0.0.142:22-10.0.0.1:35398.service: Deactivated successfully. Oct 9 01:06:22.837538 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 01:06:22.839114 systemd-logind[1432]: Session 5 logged out. Waiting for processes to exit. Oct 9 01:06:22.857148 systemd[1]: Started sshd@5-10.0.0.142:22-10.0.0.1:35414.service - OpenSSH per-connection server daemon (10.0.0.1:35414). Oct 9 01:06:22.857960 systemd-logind[1432]: Removed session 5. Oct 9 01:06:22.890628 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 35414 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:06:22.891894 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:06:22.895680 systemd-logind[1432]: New session 6 of user core. Oct 9 01:06:22.903939 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 9 01:06:22.955092 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 01:06:22.955409 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:06:22.959463 sudo[1597]: pam_unix(sudo:session): session closed for user root Oct 9 01:06:22.964344 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 9 01:06:22.964921 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:06:22.981127 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 01:06:23.001692 augenrules[1619]: No rules Oct 9 01:06:23.002807 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 01:06:23.002993 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 01:06:23.003874 sudo[1596]: pam_unix(sudo:session): session closed for user root Oct 9 01:06:23.005294 sshd[1592]: pam_unix(sshd:session): session closed for user core Oct 9 01:06:23.016904 systemd[1]: sshd@5-10.0.0.142:22-10.0.0.1:35414.service: Deactivated successfully. Oct 9 01:06:23.018288 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 01:06:23.019986 systemd-logind[1432]: Session 6 logged out. Waiting for processes to exit. Oct 9 01:06:23.026105 systemd[1]: Started sshd@6-10.0.0.142:22-10.0.0.1:35424.service - OpenSSH per-connection server daemon (10.0.0.1:35424). Oct 9 01:06:23.029159 systemd-logind[1432]: Removed session 6. Oct 9 01:06:23.060043 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 35424 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:06:23.061188 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:06:23.064797 systemd-logind[1432]: New session 7 of user core. Oct 9 01:06:23.084893 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 9 01:06:23.134506 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 01:06:23.134834 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:06:23.437967 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 01:06:23.438054 (dockerd)[1651]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 01:06:23.670929 dockerd[1651]: time="2024-10-09T01:06:23.670871278Z" level=info msg="Starting up" Oct 9 01:06:23.810676 dockerd[1651]: time="2024-10-09T01:06:23.810531070Z" level=info msg="Loading containers: start." Oct 9 01:06:23.946762 kernel: Initializing XFRM netlink socket Oct 9 01:06:24.009795 systemd-networkd[1389]: docker0: Link UP Oct 9 01:06:24.045905 dockerd[1651]: time="2024-10-09T01:06:24.045856397Z" level=info msg="Loading containers: done." Oct 9 01:06:24.060340 dockerd[1651]: time="2024-10-09T01:06:24.060290973Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 01:06:24.060467 dockerd[1651]: time="2024-10-09T01:06:24.060389451Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Oct 9 01:06:24.060506 dockerd[1651]: time="2024-10-09T01:06:24.060486902Z" level=info msg="Daemon has completed initialization" Oct 9 01:06:24.087001 dockerd[1651]: time="2024-10-09T01:06:24.086880473Z" level=info msg="API listen on /run/docker.sock" Oct 9 01:06:24.087105 systemd[1]: Started docker.service - Docker Application Container Engine. 
Oct 9 01:06:24.747504 containerd[1447]: time="2024-10-09T01:06:24.747465538Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\""
Oct 9 01:06:25.405820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2169636355.mount: Deactivated successfully.
Oct 9 01:06:26.650020 containerd[1447]: time="2024-10-09T01:06:26.649966410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:26.653496 containerd[1447]: time="2024-10-09T01:06:26.653255508Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=29945964"
Oct 9 01:06:26.654425 containerd[1447]: time="2024-10-09T01:06:26.654392840Z" level=info msg="ImageCreate event name:\"sha256:2bf7f63bc5e4cb1f93cdd13e325e181862614b805d7cc45282599fb6dd1d329d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:26.657152 containerd[1447]: time="2024-10-09T01:06:26.657121097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:26.658497 containerd[1447]: time="2024-10-09T01:06:26.658377166Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:2bf7f63bc5e4cb1f93cdd13e325e181862614b805d7cc45282599fb6dd1d329d\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"29942762\" in 1.910868345s"
Oct 9 01:06:26.658497 containerd[1447]: time="2024-10-09T01:06:26.658413932Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference \"sha256:2bf7f63bc5e4cb1f93cdd13e325e181862614b805d7cc45282599fb6dd1d329d\""
Oct 9 01:06:26.676811 containerd[1447]: time="2024-10-09T01:06:26.676772329Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\""
Oct 9 01:06:28.262651 containerd[1447]: time="2024-10-09T01:06:28.262606297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:28.263205 containerd[1447]: time="2024-10-09T01:06:28.263160938Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=26885775"
Oct 9 01:06:28.263916 containerd[1447]: time="2024-10-09T01:06:28.263874769Z" level=info msg="ImageCreate event name:\"sha256:e1be44cf89df192ebc5b44737bf94ac472fe9a0eb3ddf9422d96eed2380ea7e6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:28.266693 containerd[1447]: time="2024-10-09T01:06:28.266665765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:28.267852 containerd[1447]: time="2024-10-09T01:06:28.267822579Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:e1be44cf89df192ebc5b44737bf94ac472fe9a0eb3ddf9422d96eed2380ea7e6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"28373587\" in 1.591011642s"
Oct 9 01:06:28.267903 containerd[1447]: time="2024-10-09T01:06:28.267858991Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:e1be44cf89df192ebc5b44737bf94ac472fe9a0eb3ddf9422d96eed2380ea7e6\""
Oct 9 01:06:28.286998 containerd[1447]: time="2024-10-09T01:06:28.286960130Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\""
Oct 9 01:06:28.799603 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 9 01:06:28.808917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:06:28.898632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:06:28.902396 (kubelet)[1938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:06:28.943196 kubelet[1938]: E1009 01:06:28.943137 1938 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:06:28.946421 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:06:28.946569 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:06:29.502055 containerd[1447]: time="2024-10-09T01:06:29.501986541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:29.502931 containerd[1447]: time="2024-10-09T01:06:29.502877613Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=16154274"
Oct 9 01:06:29.503815 containerd[1447]: time="2024-10-09T01:06:29.503770752Z" level=info msg="ImageCreate event name:\"sha256:b6db73bf7694d702f3d1cb29dc3e4051df33cc6316cd3636eabbab1e6d26466f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:29.506635 containerd[1447]: time="2024-10-09T01:06:29.506604495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:29.508762 containerd[1447]: time="2024-10-09T01:06:29.508698117Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:b6db73bf7694d702f3d1cb29dc3e4051df33cc6316cd3636eabbab1e6d26466f\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"17642104\" in 1.221705697s"
Oct 9 01:06:29.508762 containerd[1447]: time="2024-10-09T01:06:29.508726256Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference \"sha256:b6db73bf7694d702f3d1cb29dc3e4051df33cc6316cd3636eabbab1e6d26466f\""
Oct 9 01:06:29.526779 containerd[1447]: time="2024-10-09T01:06:29.526737393Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\""
Oct 9 01:06:30.660717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3864632537.mount: Deactivated successfully.
Oct 9 01:06:30.987211 containerd[1447]: time="2024-10-09T01:06:30.987084318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:30.987988 containerd[1447]: time="2024-10-09T01:06:30.987961508Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=25648343"
Oct 9 01:06:30.988860 containerd[1447]: time="2024-10-09T01:06:30.988832692Z" level=info msg="ImageCreate event name:\"sha256:57f247cd1b5672dc99f46b3e3e288bbc06e9c17dfcfdb6b855cd83af9a418d43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:30.991089 containerd[1447]: time="2024-10-09T01:06:30.991032886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:30.991719 containerd[1447]: time="2024-10-09T01:06:30.991585421Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:57f247cd1b5672dc99f46b3e3e288bbc06e9c17dfcfdb6b855cd83af9a418d43\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"25647360\" in 1.464790979s"
Oct 9 01:06:30.991719 containerd[1447]: time="2024-10-09T01:06:30.991619270Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:57f247cd1b5672dc99f46b3e3e288bbc06e9c17dfcfdb6b855cd83af9a418d43\""
Oct 9 01:06:31.009689 containerd[1447]: time="2024-10-09T01:06:31.009659891Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 9 01:06:31.618507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount481642361.mount: Deactivated successfully.
Oct 9 01:06:32.242664 containerd[1447]: time="2024-10-09T01:06:32.242612937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:32.243160 containerd[1447]: time="2024-10-09T01:06:32.243110559Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Oct 9 01:06:32.244223 containerd[1447]: time="2024-10-09T01:06:32.244192509Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:32.250334 containerd[1447]: time="2024-10-09T01:06:32.250294168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:32.251366 containerd[1447]: time="2024-10-09T01:06:32.251339316Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.241646621s"
Oct 9 01:06:32.251421 containerd[1447]: time="2024-10-09T01:06:32.251371418Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Oct 9 01:06:32.269692 containerd[1447]: time="2024-10-09T01:06:32.269659786Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 9 01:06:32.690645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3405204828.mount: Deactivated successfully.
Oct 9 01:06:32.694441 containerd[1447]: time="2024-10-09T01:06:32.694398426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:32.695453 containerd[1447]: time="2024-10-09T01:06:32.695411034Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Oct 9 01:06:32.696307 containerd[1447]: time="2024-10-09T01:06:32.696262774Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:32.698491 containerd[1447]: time="2024-10-09T01:06:32.698432609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:32.699606 containerd[1447]: time="2024-10-09T01:06:32.699553472Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 429.862063ms"
Oct 9 01:06:32.699606 containerd[1447]: time="2024-10-09T01:06:32.699601346Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Oct 9 01:06:32.717727 containerd[1447]: time="2024-10-09T01:06:32.717678700Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Oct 9 01:06:33.320374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3016210955.mount: Deactivated successfully.
Oct 9 01:06:35.746550 containerd[1447]: time="2024-10-09T01:06:35.746492877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:35.747923 containerd[1447]: time="2024-10-09T01:06:35.747874824Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Oct 9 01:06:35.748949 containerd[1447]: time="2024-10-09T01:06:35.748890584Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:35.752031 containerd[1447]: time="2024-10-09T01:06:35.751975356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:35.753440 containerd[1447]: time="2024-10-09T01:06:35.753306888Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.03558739s"
Oct 9 01:06:35.753440 containerd[1447]: time="2024-10-09T01:06:35.753343782Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Oct 9 01:06:39.196998 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 9 01:06:39.206935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:06:39.296863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:06:39.300287 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:06:39.336498 kubelet[2160]: E1009 01:06:39.336360 2160 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:06:39.339149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:06:39.339290 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:06:41.140235 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:06:41.150160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:06:41.165528 systemd[1]: Reloading requested from client PID 2175 ('systemctl') (unit session-7.scope)...
Oct 9 01:06:41.165543 systemd[1]: Reloading...
Oct 9 01:06:41.233502 zram_generator::config[2217]: No configuration found.
Oct 9 01:06:41.392341 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:06:41.443141 systemd[1]: Reloading finished in 277 ms.
Oct 9 01:06:41.483540 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:06:41.485791 systemd[1]: kubelet.service: Deactivated successfully.
Oct 9 01:06:41.485978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:06:41.487327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:06:41.580574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:06:41.584370 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 01:06:41.627174 kubelet[2261]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 01:06:41.627174 kubelet[2261]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 9 01:06:41.627174 kubelet[2261]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 01:06:41.627478 kubelet[2261]: I1009 01:06:41.627272 2261 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 9 01:06:42.054465 kubelet[2261]: I1009 01:06:42.054309 2261 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Oct 9 01:06:42.054465 kubelet[2261]: I1009 01:06:42.054414 2261 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 9 01:06:42.054911 kubelet[2261]: I1009 01:06:42.054891 2261 server.go:927] "Client rotation is on, will bootstrap in background"
Oct 9 01:06:42.102628 kubelet[2261]: I1009 01:06:42.102530 2261 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 01:06:42.102628 kubelet[2261]: E1009 01:06:42.102560 2261 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.142:6443: connect: connection refused
Oct 9 01:06:42.112548 kubelet[2261]: I1009 01:06:42.112524 2261 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 9 01:06:42.113031 kubelet[2261]: I1009 01:06:42.113003 2261 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 01:06:42.113665 kubelet[2261]: I1009 01:06:42.113110 2261 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 9 01:06:42.113665 kubelet[2261]: I1009 01:06:42.113339 2261 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 01:06:42.113665 kubelet[2261]: I1009 01:06:42.113347 2261 container_manager_linux.go:301] "Creating device plugin manager"
Oct 9 01:06:42.113665 kubelet[2261]: I1009 01:06:42.113578 2261 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 01:06:42.114562 kubelet[2261]: I1009 01:06:42.114443 2261 kubelet.go:400] "Attempting to sync node with API server"
Oct 9 01:06:42.114562 kubelet[2261]: I1009 01:06:42.114463 2261 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 01:06:42.114658 kubelet[2261]: I1009 01:06:42.114647 2261 kubelet.go:312] "Adding apiserver pod source"
Oct 9 01:06:42.115420 kubelet[2261]: I1009 01:06:42.114726 2261 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 01:06:42.115420 kubelet[2261]: W1009 01:06:42.115150 2261 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Oct 9 01:06:42.115420 kubelet[2261]: E1009 01:06:42.115199 2261 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Oct 9 01:06:42.116553 kubelet[2261]: I1009 01:06:42.116004 2261 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1"
Oct 9 01:06:42.116553 kubelet[2261]: I1009 01:06:42.116490 2261 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 01:06:42.117018 kubelet[2261]: W1009 01:06:42.117005 2261 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 9 01:06:42.118032 kubelet[2261]: I1009 01:06:42.118018 2261 server.go:1264] "Started kubelet"
Oct 9 01:06:42.118238 kubelet[2261]: W1009 01:06:42.118148 2261 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Oct 9 01:06:42.118238 kubelet[2261]: E1009 01:06:42.118203 2261 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Oct 9 01:06:42.118301 kubelet[2261]: I1009 01:06:42.118233 2261 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 01:06:42.118882 kubelet[2261]: I1009 01:06:42.118842 2261 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 01:06:42.121789 kubelet[2261]: I1009 01:06:42.121621 2261 server.go:455] "Adding debug handlers to kubelet server"
Oct 9 01:06:42.124985 kubelet[2261]: I1009 01:06:42.124957 2261 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 01:06:42.127900 kubelet[2261]: I1009 01:06:42.127880 2261 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 01:06:42.129526 kubelet[2261]: I1009 01:06:42.129505 2261 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 9 01:06:42.129737 kubelet[2261]: I1009 01:06:42.129721 2261 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Oct 9 01:06:42.129984 kubelet[2261]: I1009 01:06:42.129973 2261 reconciler.go:26] "Reconciler: start to sync state"
Oct 9 01:06:42.130352 kubelet[2261]: W1009 01:06:42.130315 2261 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Oct 9 01:06:42.131478 kubelet[2261]: E1009 01:06:42.131445 2261 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Oct 9 01:06:42.131478 kubelet[2261]: E1009 01:06:42.129123 2261 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.142:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.142:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fca362e33d5b70 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 01:06:42.117999472 +0000 UTC m=+0.530639822,LastTimestamp:2024-10-09 01:06:42.117999472 +0000 UTC m=+0.530639822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 9 01:06:42.131595 kubelet[2261]: E1009 01:06:42.131536 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="200ms"
Oct 9 01:06:42.132688 kubelet[2261]: I1009 01:06:42.132664 2261 factory.go:221] Registration of the systemd container factory successfully
Oct 9 01:06:42.133123 kubelet[2261]: E1009 01:06:42.133098 2261 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 9 01:06:42.133123 kubelet[2261]: I1009 01:06:42.133010 2261 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 01:06:42.133999 kubelet[2261]: I1009 01:06:42.133961 2261 factory.go:221] Registration of the containerd container factory successfully
Oct 9 01:06:42.142796 kubelet[2261]: I1009 01:06:42.142757 2261 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 01:06:42.143760 kubelet[2261]: I1009 01:06:42.143703 2261 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 01:06:42.143873 kubelet[2261]: I1009 01:06:42.143862 2261 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 01:06:42.143897 kubelet[2261]: I1009 01:06:42.143885 2261 kubelet.go:2337] "Starting kubelet main sync loop"
Oct 9 01:06:42.143937 kubelet[2261]: E1009 01:06:42.143922 2261 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 01:06:42.146818 kubelet[2261]: W1009 01:06:42.146644 2261 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Oct 9 01:06:42.147619 kubelet[2261]: E1009 01:06:42.146678 2261 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Oct 9 01:06:42.147619 kubelet[2261]: I1009 01:06:42.147475 2261 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 01:06:42.147619 kubelet[2261]: I1009 01:06:42.147485 2261 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 01:06:42.147619 kubelet[2261]: I1009 01:06:42.147500 2261 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 01:06:42.214416 kubelet[2261]: I1009 01:06:42.214368 2261 policy_none.go:49] "None policy: Start"
Oct 9 01:06:42.215194 kubelet[2261]: I1009 01:06:42.215175 2261 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 01:06:42.215255 kubelet[2261]: I1009 01:06:42.215205 2261 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 01:06:42.220941 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 9 01:06:42.230703 kubelet[2261]: I1009 01:06:42.230656 2261 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 9 01:06:42.231012 kubelet[2261]: E1009 01:06:42.230980 2261 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
Oct 9 01:06:42.237121 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 9 01:06:42.239632 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 9 01:06:42.244481 kubelet[2261]: E1009 01:06:42.244446 2261 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 9 01:06:42.246476 kubelet[2261]: I1009 01:06:42.246438 2261 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 01:06:42.246666 kubelet[2261]: I1009 01:06:42.246619 2261 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 9 01:06:42.246937 kubelet[2261]: I1009 01:06:42.246724 2261 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 01:06:42.248562 kubelet[2261]: E1009 01:06:42.248530 2261 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 9 01:06:42.332372 kubelet[2261]: E1009 01:06:42.332258 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="400ms"
Oct 9 01:06:42.432613 kubelet[2261]: I1009 01:06:42.432579 2261 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 9 01:06:42.432950 kubelet[2261]: E1009 01:06:42.432928 2261 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost"
Oct 9 01:06:42.445251 kubelet[2261]: I1009 01:06:42.445180 2261 topology_manager.go:215] "Topology Admit Handler" podUID="1c90a50c8bbd4577796b947041df7613" podNamespace="kube-system" podName="kube-apiserver-localhost"
Oct 9 01:06:42.446085 kubelet[2261]: I1009 01:06:42.446037 2261 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system"
podName="kube-controller-manager-localhost" Oct 9 01:06:42.446921 kubelet[2261]: I1009 01:06:42.446889 2261 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 01:06:42.451289 systemd[1]: Created slice kubepods-burstable-pod1c90a50c8bbd4577796b947041df7613.slice - libcontainer container kubepods-burstable-pod1c90a50c8bbd4577796b947041df7613.slice. Oct 9 01:06:42.475185 systemd[1]: Created slice kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice - libcontainer container kubepods-burstable-pode5c757a7a09759fc423ca409747c56ae.slice. Oct 9 01:06:42.494025 systemd[1]: Created slice kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice - libcontainer container kubepods-burstable-pod2fcea4df269cc1e6513f9e3e768ded5a.slice. Oct 9 01:06:42.532578 kubelet[2261]: I1009 01:06:42.532534 2261 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c90a50c8bbd4577796b947041df7613-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1c90a50c8bbd4577796b947041df7613\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:06:42.532578 kubelet[2261]: I1009 01:06:42.532573 2261 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:06:42.532703 kubelet[2261]: I1009 01:06:42.532613 2261 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:06:42.532703 kubelet[2261]: I1009 01:06:42.532634 2261 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost" Oct 9 01:06:42.532703 kubelet[2261]: I1009 01:06:42.532653 2261 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c90a50c8bbd4577796b947041df7613-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1c90a50c8bbd4577796b947041df7613\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:06:42.532703 kubelet[2261]: I1009 01:06:42.532666 2261 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c90a50c8bbd4577796b947041df7613-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1c90a50c8bbd4577796b947041df7613\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:06:42.532703 kubelet[2261]: I1009 01:06:42.532682 2261 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:06:42.532819 kubelet[2261]: I1009 01:06:42.532696 2261 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 9 01:06:42.532819 kubelet[2261]: I1009 01:06:42.532710 2261 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:06:42.733234 kubelet[2261]: E1009 01:06:42.733116 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="800ms" Oct 9 01:06:42.773555 kubelet[2261]: E1009 01:06:42.773522 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:42.775854 containerd[1447]: time="2024-10-09T01:06:42.775806778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1c90a50c8bbd4577796b947041df7613,Namespace:kube-system,Attempt:0,}" Oct 9 01:06:42.793035 kubelet[2261]: E1009 01:06:42.792995 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:42.793427 containerd[1447]: time="2024-10-09T01:06:42.793384928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,}" Oct 9 01:06:42.796600 kubelet[2261]: E1009 01:06:42.796575 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:42.796884 containerd[1447]: 
time="2024-10-09T01:06:42.796853899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,}" Oct 9 01:06:42.835093 kubelet[2261]: I1009 01:06:42.835069 2261 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:06:42.835511 kubelet[2261]: E1009 01:06:42.835477 2261 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Oct 9 01:06:42.950434 kubelet[2261]: W1009 01:06:42.950394 2261 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Oct 9 01:06:42.950434 kubelet[2261]: E1009 01:06:42.950434 2261 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Oct 9 01:06:43.216114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2791718452.mount: Deactivated successfully. 
Oct 9 01:06:43.221797 containerd[1447]: time="2024-10-09T01:06:43.221709006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:06:43.222998 containerd[1447]: time="2024-10-09T01:06:43.222930122Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:06:43.223121 containerd[1447]: time="2024-10-09T01:06:43.223082492Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:06:43.223811 containerd[1447]: time="2024-10-09T01:06:43.223788516Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:06:43.224261 containerd[1447]: time="2024-10-09T01:06:43.224227443Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Oct 9 01:06:43.224900 containerd[1447]: time="2024-10-09T01:06:43.224865653Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:06:43.225560 containerd[1447]: time="2024-10-09T01:06:43.225519569Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:06:43.229643 containerd[1447]: time="2024-10-09T01:06:43.229595310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:06:43.230913 
containerd[1447]: time="2024-10-09T01:06:43.230466531Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 437.018872ms" Oct 9 01:06:43.231290 containerd[1447]: time="2024-10-09T01:06:43.231261388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 434.350433ms" Oct 9 01:06:43.233661 containerd[1447]: time="2024-10-09T01:06:43.233615387Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 457.734291ms" Oct 9 01:06:43.268287 kubelet[2261]: W1009 01:06:43.268222 2261 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Oct 9 01:06:43.268287 kubelet[2261]: E1009 01:06:43.268290 2261 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Oct 9 01:06:43.373059 containerd[1447]: time="2024-10-09T01:06:43.372657616Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:43.373059 containerd[1447]: time="2024-10-09T01:06:43.372722711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:43.373059 containerd[1447]: time="2024-10-09T01:06:43.372736538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:43.373059 containerd[1447]: time="2024-10-09T01:06:43.372814101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:43.373432 containerd[1447]: time="2024-10-09T01:06:43.372904532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:43.373432 containerd[1447]: time="2024-10-09T01:06:43.372947810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:43.373432 containerd[1447]: time="2024-10-09T01:06:43.372958399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:43.373432 containerd[1447]: time="2024-10-09T01:06:43.373013904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:43.373947 containerd[1447]: time="2024-10-09T01:06:43.373790978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:43.375844 containerd[1447]: time="2024-10-09T01:06:43.374053599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:43.375844 containerd[1447]: time="2024-10-09T01:06:43.374076537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:43.375844 containerd[1447]: time="2024-10-09T01:06:43.374156937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:43.399905 systemd[1]: Started cri-containerd-70e6dd1e791104c6cbc7e6ec5cc2ddf8acc3d058780afa6f503a8192a2ed47ec.scope - libcontainer container 70e6dd1e791104c6cbc7e6ec5cc2ddf8acc3d058780afa6f503a8192a2ed47ec. Oct 9 01:06:43.401115 systemd[1]: Started cri-containerd-947ff0bf024140dd92ac3cd7df37a4e61f885457f8b5b4c652a72269aaf700fe.scope - libcontainer container 947ff0bf024140dd92ac3cd7df37a4e61f885457f8b5b4c652a72269aaf700fe. Oct 9 01:06:43.402526 systemd[1]: Started cri-containerd-9b1246409d558ded5d2de29a2ac1ff54a0c073274ba3c16b56a1445c4e69601f.scope - libcontainer container 9b1246409d558ded5d2de29a2ac1ff54a0c073274ba3c16b56a1445c4e69601f. 
Oct 9 01:06:43.431464 containerd[1447]: time="2024-10-09T01:06:43.431399578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1c90a50c8bbd4577796b947041df7613,Namespace:kube-system,Attempt:0,} returns sandbox id \"947ff0bf024140dd92ac3cd7df37a4e61f885457f8b5b4c652a72269aaf700fe\"" Oct 9 01:06:43.431908 containerd[1447]: time="2024-10-09T01:06:43.431399898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2fcea4df269cc1e6513f9e3e768ded5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"70e6dd1e791104c6cbc7e6ec5cc2ddf8acc3d058780afa6f503a8192a2ed47ec\"" Oct 9 01:06:43.432922 kubelet[2261]: E1009 01:06:43.432839 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:43.433179 kubelet[2261]: E1009 01:06:43.433155 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:43.435972 containerd[1447]: time="2024-10-09T01:06:43.435931909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e5c757a7a09759fc423ca409747c56ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b1246409d558ded5d2de29a2ac1ff54a0c073274ba3c16b56a1445c4e69601f\"" Oct 9 01:06:43.436525 kubelet[2261]: E1009 01:06:43.436393 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:43.437490 containerd[1447]: time="2024-10-09T01:06:43.437463439Z" level=info msg="CreateContainer within sandbox \"947ff0bf024140dd92ac3cd7df37a4e61f885457f8b5b4c652a72269aaf700fe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 01:06:43.437543 containerd[1447]: 
time="2024-10-09T01:06:43.437487895Z" level=info msg="CreateContainer within sandbox \"70e6dd1e791104c6cbc7e6ec5cc2ddf8acc3d058780afa6f503a8192a2ed47ec\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 01:06:43.438763 containerd[1447]: time="2024-10-09T01:06:43.438650629Z" level=info msg="CreateContainer within sandbox \"9b1246409d558ded5d2de29a2ac1ff54a0c073274ba3c16b56a1445c4e69601f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 01:06:43.454891 containerd[1447]: time="2024-10-09T01:06:43.454855212Z" level=info msg="CreateContainer within sandbox \"947ff0bf024140dd92ac3cd7df37a4e61f885457f8b5b4c652a72269aaf700fe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1539090e40fbd26b892a35c11efbd8756b0e6e0aefd9dbe54065bc5bfd38df11\"" Oct 9 01:06:43.455462 containerd[1447]: time="2024-10-09T01:06:43.455436518Z" level=info msg="StartContainer for \"1539090e40fbd26b892a35c11efbd8756b0e6e0aefd9dbe54065bc5bfd38df11\"" Oct 9 01:06:43.457863 containerd[1447]: time="2024-10-09T01:06:43.457693733Z" level=info msg="CreateContainer within sandbox \"70e6dd1e791104c6cbc7e6ec5cc2ddf8acc3d058780afa6f503a8192a2ed47ec\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"614d8a6b18fe8abfd89163aa5420f31cd1ed6c7efa677350e6690399cfa9fe29\"" Oct 9 01:06:43.458104 containerd[1447]: time="2024-10-09T01:06:43.458074158Z" level=info msg="StartContainer for \"614d8a6b18fe8abfd89163aa5420f31cd1ed6c7efa677350e6690399cfa9fe29\"" Oct 9 01:06:43.458502 containerd[1447]: time="2024-10-09T01:06:43.458471366Z" level=info msg="CreateContainer within sandbox \"9b1246409d558ded5d2de29a2ac1ff54a0c073274ba3c16b56a1445c4e69601f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"46d5aac4335a8799796f424efe30f25206e629ad3e1b68b7e9efe851fb03eea5\"" Oct 9 01:06:43.458936 containerd[1447]: time="2024-10-09T01:06:43.458901222Z" level=info msg="StartContainer for 
\"46d5aac4335a8799796f424efe30f25206e629ad3e1b68b7e9efe851fb03eea5\"" Oct 9 01:06:43.470306 kubelet[2261]: W1009 01:06:43.469406 2261 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Oct 9 01:06:43.470462 kubelet[2261]: E1009 01:06:43.470409 2261 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Oct 9 01:06:43.481910 systemd[1]: Started cri-containerd-1539090e40fbd26b892a35c11efbd8756b0e6e0aefd9dbe54065bc5bfd38df11.scope - libcontainer container 1539090e40fbd26b892a35c11efbd8756b0e6e0aefd9dbe54065bc5bfd38df11. Oct 9 01:06:43.482223 kubelet[2261]: W1009 01:06:43.482148 2261 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Oct 9 01:06:43.482223 kubelet[2261]: E1009 01:06:43.482205 2261 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Oct 9 01:06:43.485276 systemd[1]: Started cri-containerd-46d5aac4335a8799796f424efe30f25206e629ad3e1b68b7e9efe851fb03eea5.scope - libcontainer container 46d5aac4335a8799796f424efe30f25206e629ad3e1b68b7e9efe851fb03eea5. Oct 9 01:06:43.486083 systemd[1]: Started cri-containerd-614d8a6b18fe8abfd89163aa5420f31cd1ed6c7efa677350e6690399cfa9fe29.scope - libcontainer container 614d8a6b18fe8abfd89163aa5420f31cd1ed6c7efa677350e6690399cfa9fe29. 
Oct 9 01:06:43.535646 kubelet[2261]: E1009 01:06:43.534206 2261 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="1.6s" Oct 9 01:06:43.551200 containerd[1447]: time="2024-10-09T01:06:43.551006090Z" level=info msg="StartContainer for \"1539090e40fbd26b892a35c11efbd8756b0e6e0aefd9dbe54065bc5bfd38df11\" returns successfully" Oct 9 01:06:43.551200 containerd[1447]: time="2024-10-09T01:06:43.551164894Z" level=info msg="StartContainer for \"46d5aac4335a8799796f424efe30f25206e629ad3e1b68b7e9efe851fb03eea5\" returns successfully" Oct 9 01:06:43.551825 containerd[1447]: time="2024-10-09T01:06:43.551770097Z" level=info msg="StartContainer for \"614d8a6b18fe8abfd89163aa5420f31cd1ed6c7efa677350e6690399cfa9fe29\" returns successfully" Oct 9 01:06:43.637022 kubelet[2261]: I1009 01:06:43.636900 2261 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:06:43.637514 kubelet[2261]: E1009 01:06:43.637461 2261 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Oct 9 01:06:44.154151 kubelet[2261]: E1009 01:06:44.154068 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:44.157529 kubelet[2261]: E1009 01:06:44.156962 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:44.159493 kubelet[2261]: E1009 01:06:44.159477 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:45.117027 kubelet[2261]: I1009 01:06:45.116999 2261 apiserver.go:52] "Watching apiserver" Oct 9 01:06:45.130833 kubelet[2261]: I1009 01:06:45.130802 2261 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 9 01:06:45.142731 kubelet[2261]: E1009 01:06:45.142681 2261 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 9 01:06:45.161350 kubelet[2261]: E1009 01:06:45.161263 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:45.239216 kubelet[2261]: I1009 01:06:45.239174 2261 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:06:45.244308 kubelet[2261]: I1009 01:06:45.244250 2261 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 01:06:46.931780 kubelet[2261]: E1009 01:06:46.931701 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:47.058237 systemd[1]: Reloading requested from client PID 2542 ('systemctl') (unit session-7.scope)... Oct 9 01:06:47.058253 systemd[1]: Reloading... Oct 9 01:06:47.118781 zram_generator::config[2581]: No configuration found. Oct 9 01:06:47.162704 kubelet[2261]: E1009 01:06:47.162666 2261 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:06:47.200986 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 9 01:06:47.262852 systemd[1]: Reloading finished in 204 ms. Oct 9 01:06:47.298981 kubelet[2261]: I1009 01:06:47.298917 2261 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:06:47.299093 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:06:47.315688 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 01:06:47.316579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:06:47.326965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:06:47.411435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:06:47.415714 (kubelet)[2623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:06:47.453465 kubelet[2623]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:06:47.453465 kubelet[2623]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 01:06:47.453465 kubelet[2623]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 01:06:47.453465 kubelet[2623]: I1009 01:06:47.453198 2623 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:06:47.457789 kubelet[2623]: I1009 01:06:47.457761 2623 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 9 01:06:47.457789 kubelet[2623]: I1009 01:06:47.457785 2623 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:06:47.458018 kubelet[2623]: I1009 01:06:47.457973 2623 server.go:927] "Client rotation is on, will bootstrap in background" Oct 9 01:06:47.459355 kubelet[2623]: I1009 01:06:47.459331 2623 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 01:06:47.460650 kubelet[2623]: I1009 01:06:47.460629 2623 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:06:47.466377 kubelet[2623]: I1009 01:06:47.466349 2623 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Oct 9 01:06:47.466573 kubelet[2623]: I1009 01:06:47.466539 2623 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 01:06:47.466719 kubelet[2623]: I1009 01:06:47.466565 2623 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 9 01:06:47.466719 kubelet[2623]: I1009 01:06:47.466718 2623 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 01:06:47.466829 kubelet[2623]: I1009 01:06:47.466727 2623 container_manager_linux.go:301] "Creating device plugin manager"
Oct 9 01:06:47.466829 kubelet[2623]: I1009 01:06:47.466773 2623 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 01:06:47.466898 kubelet[2623]: I1009 01:06:47.466865 2623 kubelet.go:400] "Attempting to sync node with API server"
Oct 9 01:06:47.466898 kubelet[2623]: I1009 01:06:47.466877 2623 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 01:06:47.466898 kubelet[2623]: I1009 01:06:47.466898 2623 kubelet.go:312] "Adding apiserver pod source"
Oct 9 01:06:47.469847 kubelet[2623]: I1009 01:06:47.466910 2623 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 01:06:47.470865 kubelet[2623]: I1009 01:06:47.470833 2623 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1"
Oct 9 01:06:47.471023 kubelet[2623]: I1009 01:06:47.471004 2623 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 01:06:47.472021 kubelet[2623]: I1009 01:06:47.471995 2623 server.go:1264] "Started kubelet"
Oct 9 01:06:47.472681 kubelet[2623]: I1009 01:06:47.472633 2623 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 01:06:47.472782 kubelet[2623]: I1009 01:06:47.472754 2623 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 01:06:47.473560 kubelet[2623]: I1009 01:06:47.473319 2623 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 01:06:47.473654 kubelet[2623]: I1009 01:06:47.472884 2623 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 01:06:47.474023 kubelet[2623]: I1009 01:06:47.474001 2623 server.go:455] "Adding debug handlers to kubelet server"
Oct 9 01:06:47.475302 kubelet[2623]: I1009 01:06:47.475282 2623 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 9 01:06:47.475531 kubelet[2623]: I1009 01:06:47.475503 2623 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Oct 9 01:06:47.475718 kubelet[2623]: I1009 01:06:47.475705 2623 reconciler.go:26] "Reconciler: start to sync state"
Oct 9 01:06:47.482910 kubelet[2623]: I1009 01:06:47.482880 2623 factory.go:221] Registration of the systemd container factory successfully
Oct 9 01:06:47.482997 kubelet[2623]: I1009 01:06:47.482976 2623 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 01:06:47.484449 kubelet[2623]: E1009 01:06:47.483296 2623 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 9 01:06:47.489388 kubelet[2623]: I1009 01:06:47.489271 2623 factory.go:221] Registration of the containerd container factory successfully
Oct 9 01:06:47.503915 kubelet[2623]: I1009 01:06:47.503871 2623 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 01:06:47.505395 kubelet[2623]: I1009 01:06:47.505359 2623 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 01:06:47.505795 kubelet[2623]: I1009 01:06:47.505583 2623 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 01:06:47.505795 kubelet[2623]: I1009 01:06:47.505613 2623 kubelet.go:2337] "Starting kubelet main sync loop"
Oct 9 01:06:47.505795 kubelet[2623]: E1009 01:06:47.505721 2623 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 01:06:47.523730 kubelet[2623]: I1009 01:06:47.523704 2623 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 01:06:47.523730 kubelet[2623]: I1009 01:06:47.523720 2623 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 01:06:47.523862 kubelet[2623]: I1009 01:06:47.523738 2623 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 01:06:47.523933 kubelet[2623]: I1009 01:06:47.523913 2623 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 9 01:06:47.523964 kubelet[2623]: I1009 01:06:47.523932 2623 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 9 01:06:47.523964 kubelet[2623]: I1009 01:06:47.523949 2623 policy_none.go:49] "None policy: Start"
Oct 9 01:06:47.524534 kubelet[2623]: I1009 01:06:47.524517 2623 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 01:06:47.524534 kubelet[2623]: I1009 01:06:47.524538 2623 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 01:06:47.524672 kubelet[2623]: I1009 01:06:47.524657 2623 state_mem.go:75] "Updated machine memory state"
Oct 9 01:06:47.528048 kubelet[2623]: I1009 01:06:47.528020 2623 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 01:06:47.528218 kubelet[2623]: I1009 01:06:47.528176 2623 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 9 01:06:47.528347 kubelet[2623]: I1009 01:06:47.528294 2623 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 01:06:47.579226 kubelet[2623]: I1009 01:06:47.579187 2623 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 9 01:06:47.585265 kubelet[2623]: I1009 01:06:47.585212 2623 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Oct 9 01:06:47.585356 kubelet[2623]: I1009 01:06:47.585312 2623 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Oct 9 01:06:47.607082 kubelet[2623]: I1009 01:06:47.607043 2623 topology_manager.go:215] "Topology Admit Handler" podUID="1c90a50c8bbd4577796b947041df7613" podNamespace="kube-system" podName="kube-apiserver-localhost"
Oct 9 01:06:47.607245 kubelet[2623]: I1009 01:06:47.607157 2623 topology_manager.go:215] "Topology Admit Handler" podUID="e5c757a7a09759fc423ca409747c56ae" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Oct 9 01:06:47.607245 kubelet[2623]: I1009 01:06:47.607193 2623 topology_manager.go:215] "Topology Admit Handler" podUID="2fcea4df269cc1e6513f9e3e768ded5a" podNamespace="kube-system" podName="kube-scheduler-localhost"
Oct 9 01:06:47.612490 kubelet[2623]: E1009 01:06:47.612439 2623 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Oct 9 01:06:47.777294 kubelet[2623]: I1009 01:06:47.777176 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c90a50c8bbd4577796b947041df7613-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1c90a50c8bbd4577796b947041df7613\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 01:06:47.777294 kubelet[2623]: I1009 01:06:47.777215 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c90a50c8bbd4577796b947041df7613-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1c90a50c8bbd4577796b947041df7613\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 01:06:47.777294 kubelet[2623]: I1009 01:06:47.777235 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 01:06:47.777294 kubelet[2623]: I1009 01:06:47.777250 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 01:06:47.777294 kubelet[2623]: I1009 01:06:47.777267 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 01:06:47.777501 kubelet[2623]: I1009 01:06:47.777283 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c90a50c8bbd4577796b947041df7613-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1c90a50c8bbd4577796b947041df7613\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 01:06:47.777501 kubelet[2623]: I1009 01:06:47.777297 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 01:06:47.777501 kubelet[2623]: I1009 01:06:47.777311 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c757a7a09759fc423ca409747c56ae-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e5c757a7a09759fc423ca409747c56ae\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 01:06:47.777501 kubelet[2623]: I1009 01:06:47.777327 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fcea4df269cc1e6513f9e3e768ded5a-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2fcea4df269cc1e6513f9e3e768ded5a\") " pod="kube-system/kube-scheduler-localhost"
Oct 9 01:06:47.914337 kubelet[2623]: E1009 01:06:47.914304 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:06:47.914337 kubelet[2623]: E1009 01:06:47.914325 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:06:47.914337 kubelet[2623]: E1009 01:06:47.914369 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:06:48.467891 kubelet[2623]: I1009 01:06:48.467813 2623 apiserver.go:52] "Watching apiserver"
Oct 9 01:06:48.476406 kubelet[2623]: I1009 01:06:48.476353 2623 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Oct 9 01:06:48.519018 kubelet[2623]: E1009 01:06:48.518966 2623 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Oct 9 01:06:48.519669 kubelet[2623]: E1009 01:06:48.519350 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:06:48.519834 kubelet[2623]: E1009 01:06:48.519726 2623 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 9 01:06:48.519834 kubelet[2623]: E1009 01:06:48.519810 2623 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 9 01:06:48.520112 kubelet[2623]: E1009 01:06:48.520027 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:06:48.520709 kubelet[2623]: E1009 01:06:48.520675 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:06:48.586063 kubelet[2623]: I1009 01:06:48.585995 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.585978023 podStartE2EDuration="1.585978023s" podCreationTimestamp="2024-10-09 01:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:06:48.540463169 +0000 UTC m=+1.121980974" watchObservedRunningTime="2024-10-09 01:06:48.585978023 +0000 UTC m=+1.167495828"
Oct 9 01:06:48.609516 kubelet[2623]: I1009 01:06:48.609114 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.60908368 podStartE2EDuration="2.60908368s" podCreationTimestamp="2024-10-09 01:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:06:48.609035904 +0000 UTC m=+1.190553709" watchObservedRunningTime="2024-10-09 01:06:48.60908368 +0000 UTC m=+1.190601485"
Oct 9 01:06:48.609516 kubelet[2623]: I1009 01:06:48.609202 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.609197185 podStartE2EDuration="1.609197185s" podCreationTimestamp="2024-10-09 01:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:06:48.586109879 +0000 UTC m=+1.167627684" watchObservedRunningTime="2024-10-09 01:06:48.609197185 +0000 UTC m=+1.190714990"
Oct 9 01:06:49.515769 kubelet[2623]: E1009 01:06:49.515311 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:06:49.521788 kubelet[2623]: E1009 01:06:49.521758 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:06:49.521897 kubelet[2623]: E1009 01:06:49.521871 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:06:51.981805 sudo[1630]: pam_unix(sudo:session): session closed for user root
Oct 9 01:06:51.983427 sshd[1627]: pam_unix(sshd:session): session closed for user core
Oct 9 01:06:51.987820 systemd[1]: sshd@6-10.0.0.142:22-10.0.0.1:35424.service: Deactivated successfully.
Oct 9 01:06:51.989337 systemd[1]: session-7.scope: Deactivated successfully.
Oct 9 01:06:51.990082 systemd[1]: session-7.scope: Consumed 7.489s CPU time, 194.1M memory peak, 0B memory swap peak.
Oct 9 01:06:51.990520 systemd-logind[1432]: Session 7 logged out. Waiting for processes to exit.
Oct 9 01:06:51.991439 systemd-logind[1432]: Removed session 7.
Oct 9 01:06:53.121345 kubelet[2623]: E1009 01:06:53.121311 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:06:53.520998 kubelet[2623]: E1009 01:06:53.520870 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:06:53.682833 kubelet[2623]: E1009 01:06:53.682802 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:06:54.524281 kubelet[2623]: E1009 01:06:54.524212 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:06:57.780583 kubelet[2623]: E1009 01:06:57.780552 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:06:58.529416 kubelet[2623]: E1009 01:06:58.529377 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:07:01.156891 update_engine[1436]: I20241009 01:07:01.156805 1436 update_attempter.cc:509] Updating boot flags...
Oct 9 01:07:01.174782 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2720)
Oct 9 01:07:01.212912 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2724)
Oct 9 01:07:01.233777 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2724)
Oct 9 01:07:03.774202 kubelet[2623]: I1009 01:07:03.774120 2623 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 9 01:07:03.788838 containerd[1447]: time="2024-10-09T01:07:03.788778009Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 9 01:07:03.789163 kubelet[2623]: I1009 01:07:03.789084 2623 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 9 01:07:04.675658 kubelet[2623]: I1009 01:07:04.675589 2623 topology_manager.go:215] "Topology Admit Handler" podUID="071e1b23-dced-470b-9e5d-913b0563d9ff" podNamespace="kube-system" podName="kube-proxy-526wf"
Oct 9 01:07:04.683862 systemd[1]: Created slice kubepods-besteffort-pod071e1b23_dced_470b_9e5d_913b0563d9ff.slice - libcontainer container kubepods-besteffort-pod071e1b23_dced_470b_9e5d_913b0563d9ff.slice.
Oct 9 01:07:04.708767 kubelet[2623]: I1009 01:07:04.708717 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/071e1b23-dced-470b-9e5d-913b0563d9ff-kube-proxy\") pod \"kube-proxy-526wf\" (UID: \"071e1b23-dced-470b-9e5d-913b0563d9ff\") " pod="kube-system/kube-proxy-526wf"
Oct 9 01:07:04.708890 kubelet[2623]: I1009 01:07:04.708778 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/071e1b23-dced-470b-9e5d-913b0563d9ff-xtables-lock\") pod \"kube-proxy-526wf\" (UID: \"071e1b23-dced-470b-9e5d-913b0563d9ff\") " pod="kube-system/kube-proxy-526wf"
Oct 9 01:07:04.708890 kubelet[2623]: I1009 01:07:04.708800 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8kqk\" (UniqueName: \"kubernetes.io/projected/071e1b23-dced-470b-9e5d-913b0563d9ff-kube-api-access-d8kqk\") pod \"kube-proxy-526wf\" (UID: \"071e1b23-dced-470b-9e5d-913b0563d9ff\") " pod="kube-system/kube-proxy-526wf"
Oct 9 01:07:04.708890 kubelet[2623]: I1009 01:07:04.708821 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/071e1b23-dced-470b-9e5d-913b0563d9ff-lib-modules\") pod \"kube-proxy-526wf\" (UID: \"071e1b23-dced-470b-9e5d-913b0563d9ff\") " pod="kube-system/kube-proxy-526wf"
Oct 9 01:07:04.885789 kubelet[2623]: I1009 01:07:04.885735 2623 topology_manager.go:215] "Topology Admit Handler" podUID="dcf8a434-5588-4bb6-8608-1479c8b994b7" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-24w24"
Oct 9 01:07:04.893035 systemd[1]: Created slice kubepods-besteffort-poddcf8a434_5588_4bb6_8608_1479c8b994b7.slice - libcontainer container kubepods-besteffort-poddcf8a434_5588_4bb6_8608_1479c8b994b7.slice.
Oct 9 01:07:04.910306 kubelet[2623]: I1009 01:07:04.910211 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtw6n\" (UniqueName: \"kubernetes.io/projected/dcf8a434-5588-4bb6-8608-1479c8b994b7-kube-api-access-xtw6n\") pod \"tigera-operator-77f994b5bb-24w24\" (UID: \"dcf8a434-5588-4bb6-8608-1479c8b994b7\") " pod="tigera-operator/tigera-operator-77f994b5bb-24w24"
Oct 9 01:07:04.910306 kubelet[2623]: I1009 01:07:04.910252 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dcf8a434-5588-4bb6-8608-1479c8b994b7-var-lib-calico\") pod \"tigera-operator-77f994b5bb-24w24\" (UID: \"dcf8a434-5588-4bb6-8608-1479c8b994b7\") " pod="tigera-operator/tigera-operator-77f994b5bb-24w24"
Oct 9 01:07:04.993036 kubelet[2623]: E1009 01:07:04.992829 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:07:04.997165 containerd[1447]: time="2024-10-09T01:07:04.997096583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-526wf,Uid:071e1b23-dced-470b-9e5d-913b0563d9ff,Namespace:kube-system,Attempt:0,}"
Oct 9 01:07:05.016286 containerd[1447]: time="2024-10-09T01:07:05.015541354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:07:05.016286 containerd[1447]: time="2024-10-09T01:07:05.016052189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:07:05.016286 containerd[1447]: time="2024-10-09T01:07:05.016065430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:07:05.016528 containerd[1447]: time="2024-10-09T01:07:05.016266444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:07:05.040923 systemd[1]: Started cri-containerd-df65577770ea2a532d3976582507015cf63b50fbebe57e52ce58e1ce3edc4377.scope - libcontainer container df65577770ea2a532d3976582507015cf63b50fbebe57e52ce58e1ce3edc4377.
Oct 9 01:07:05.058578 containerd[1447]: time="2024-10-09T01:07:05.058505770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-526wf,Uid:071e1b23-dced-470b-9e5d-913b0563d9ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"df65577770ea2a532d3976582507015cf63b50fbebe57e52ce58e1ce3edc4377\""
Oct 9 01:07:05.060895 kubelet[2623]: E1009 01:07:05.060876 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:07:05.066043 containerd[1447]: time="2024-10-09T01:07:05.065856880Z" level=info msg="CreateContainer within sandbox \"df65577770ea2a532d3976582507015cf63b50fbebe57e52ce58e1ce3edc4377\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 9 01:07:05.087418 containerd[1447]: time="2024-10-09T01:07:05.087366770Z" level=info msg="CreateContainer within sandbox \"df65577770ea2a532d3976582507015cf63b50fbebe57e52ce58e1ce3edc4377\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b6cc6dfea7bb2c4dedfc4e8eba2cf73fc795d301c899cec766418301bbb862d\""
Oct 9 01:07:05.090059 containerd[1447]: time="2024-10-09T01:07:05.090016753Z" level=info msg="StartContainer for \"0b6cc6dfea7bb2c4dedfc4e8eba2cf73fc795d301c899cec766418301bbb862d\""
Oct 9 01:07:05.119935 systemd[1]: Started cri-containerd-0b6cc6dfea7bb2c4dedfc4e8eba2cf73fc795d301c899cec766418301bbb862d.scope - libcontainer container 0b6cc6dfea7bb2c4dedfc4e8eba2cf73fc795d301c899cec766418301bbb862d.
Oct 9 01:07:05.143412 containerd[1447]: time="2024-10-09T01:07:05.143352608Z" level=info msg="StartContainer for \"0b6cc6dfea7bb2c4dedfc4e8eba2cf73fc795d301c899cec766418301bbb862d\" returns successfully"
Oct 9 01:07:05.198659 containerd[1447]: time="2024-10-09T01:07:05.198612197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-24w24,Uid:dcf8a434-5588-4bb6-8608-1479c8b994b7,Namespace:tigera-operator,Attempt:0,}"
Oct 9 01:07:05.229796 containerd[1447]: time="2024-10-09T01:07:05.228867493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:07:05.229796 containerd[1447]: time="2024-10-09T01:07:05.229302403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:07:05.229796 containerd[1447]: time="2024-10-09T01:07:05.229316164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:07:05.229796 containerd[1447]: time="2024-10-09T01:07:05.229384649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:07:05.248937 systemd[1]: Started cri-containerd-e07510da6dfe1c5b66535f7745c066f59f3e008fb9fa756fd679b4f9be24c542.scope - libcontainer container e07510da6dfe1c5b66535f7745c066f59f3e008fb9fa756fd679b4f9be24c542.
Oct 9 01:07:05.279440 containerd[1447]: time="2024-10-09T01:07:05.279334909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-24w24,Uid:dcf8a434-5588-4bb6-8608-1479c8b994b7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e07510da6dfe1c5b66535f7745c066f59f3e008fb9fa756fd679b4f9be24c542\""
Oct 9 01:07:05.281307 containerd[1447]: time="2024-10-09T01:07:05.281244401Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Oct 9 01:07:05.543650 kubelet[2623]: E1009 01:07:05.542853 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:07:06.253644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270740183.mount: Deactivated successfully.
Oct 9 01:07:06.865517 containerd[1447]: time="2024-10-09T01:07:06.865469506Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:06.866362 containerd[1447]: time="2024-10-09T01:07:06.866011102Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485883"
Oct 9 01:07:06.868368 containerd[1447]: time="2024-10-09T01:07:06.866932323Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:06.869815 containerd[1447]: time="2024-10-09T01:07:06.869786911Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:06.871390 containerd[1447]: time="2024-10-09T01:07:06.871362335Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 1.590081691s"
Oct 9 01:07:06.871495 containerd[1447]: time="2024-10-09T01:07:06.871480343Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\""
Oct 9 01:07:06.877284 containerd[1447]: time="2024-10-09T01:07:06.877256605Z" level=info msg="CreateContainer within sandbox \"e07510da6dfe1c5b66535f7745c066f59f3e008fb9fa756fd679b4f9be24c542\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 9 01:07:06.892515 containerd[1447]: time="2024-10-09T01:07:06.892468290Z" level=info msg="CreateContainer within sandbox \"e07510da6dfe1c5b66535f7745c066f59f3e008fb9fa756fd679b4f9be24c542\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ca6d308a5b45891f69ab1a75bd74c11f99dbc622a879fcc694e794082f27cae1\""
Oct 9 01:07:06.894616 containerd[1447]: time="2024-10-09T01:07:06.893406312Z" level=info msg="StartContainer for \"ca6d308a5b45891f69ab1a75bd74c11f99dbc622a879fcc694e794082f27cae1\""
Oct 9 01:07:06.929923 systemd[1]: Started cri-containerd-ca6d308a5b45891f69ab1a75bd74c11f99dbc622a879fcc694e794082f27cae1.scope - libcontainer container ca6d308a5b45891f69ab1a75bd74c11f99dbc622a879fcc694e794082f27cae1.
Oct 9 01:07:06.969507 containerd[1447]: time="2024-10-09T01:07:06.969465818Z" level=info msg="StartContainer for \"ca6d308a5b45891f69ab1a75bd74c11f99dbc622a879fcc694e794082f27cae1\" returns successfully"
Oct 9 01:07:07.544484 kubelet[2623]: I1009 01:07:07.544196 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-526wf" podStartSLOduration=3.544179999 podStartE2EDuration="3.544179999s" podCreationTimestamp="2024-10-09 01:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:07:05.550403249 +0000 UTC m=+18.131921054" watchObservedRunningTime="2024-10-09 01:07:07.544179999 +0000 UTC m=+20.125697804"
Oct 9 01:07:07.582266 kubelet[2623]: I1009 01:07:07.582115 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-24w24" podStartSLOduration=1.986847155 podStartE2EDuration="3.58209815s" podCreationTimestamp="2024-10-09 01:07:04 +0000 UTC" firstStartedPulling="2024-10-09 01:07:05.280704604 +0000 UTC m=+17.862222409" lastFinishedPulling="2024-10-09 01:07:06.875955599 +0000 UTC m=+19.457473404" observedRunningTime="2024-10-09 01:07:07.582082149 +0000 UTC m=+20.163599954" watchObservedRunningTime="2024-10-09 01:07:07.58209815 +0000 UTC m=+20.163615955"
Oct 9 01:07:11.525266 kubelet[2623]: I1009 01:07:11.524842 2623 topology_manager.go:215] "Topology Admit Handler" podUID="8c853a3e-dcf4-4311-b7d0-caa109780dcc" podNamespace="calico-system" podName="calico-typha-747ddf789f-tkmtj"
Oct 9 01:07:11.536108 systemd[1]: Created slice kubepods-besteffort-pod8c853a3e_dcf4_4311_b7d0_caa109780dcc.slice - libcontainer container kubepods-besteffort-pod8c853a3e_dcf4_4311_b7d0_caa109780dcc.slice.
Oct 9 01:07:11.558032 kubelet[2623]: I1009 01:07:11.557996 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8c853a3e-dcf4-4311-b7d0-caa109780dcc-typha-certs\") pod \"calico-typha-747ddf789f-tkmtj\" (UID: \"8c853a3e-dcf4-4311-b7d0-caa109780dcc\") " pod="calico-system/calico-typha-747ddf789f-tkmtj"
Oct 9 01:07:11.558142 kubelet[2623]: I1009 01:07:11.558058 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c853a3e-dcf4-4311-b7d0-caa109780dcc-tigera-ca-bundle\") pod \"calico-typha-747ddf789f-tkmtj\" (UID: \"8c853a3e-dcf4-4311-b7d0-caa109780dcc\") " pod="calico-system/calico-typha-747ddf789f-tkmtj"
Oct 9 01:07:11.558142 kubelet[2623]: I1009 01:07:11.558082 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jm89\" (UniqueName: \"kubernetes.io/projected/8c853a3e-dcf4-4311-b7d0-caa109780dcc-kube-api-access-6jm89\") pod \"calico-typha-747ddf789f-tkmtj\" (UID: \"8c853a3e-dcf4-4311-b7d0-caa109780dcc\") " pod="calico-system/calico-typha-747ddf789f-tkmtj"
Oct 9 01:07:11.570733 kubelet[2623]: I1009 01:07:11.570587 2623 topology_manager.go:215] "Topology Admit Handler" podUID="17f18265-8b74-4e7a-91ae-c90ca2d431b8" podNamespace="calico-system" podName="calico-node-mmclm"
Oct 9 01:07:11.581339 systemd[1]: Created slice kubepods-besteffort-pod17f18265_8b74_4e7a_91ae_c90ca2d431b8.slice - libcontainer container kubepods-besteffort-pod17f18265_8b74_4e7a_91ae_c90ca2d431b8.slice.
Oct 9 01:07:11.659060 kubelet[2623]: I1009 01:07:11.659025 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-lib-modules\") pod \"calico-node-mmclm\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " pod="calico-system/calico-node-mmclm"
Oct 9 01:07:11.659060 kubelet[2623]: I1009 01:07:11.659061 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-cni-bin-dir\") pod \"calico-node-mmclm\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " pod="calico-system/calico-node-mmclm"
Oct 9 01:07:11.659217 kubelet[2623]: I1009 01:07:11.659083 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-cni-log-dir\") pod \"calico-node-mmclm\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " pod="calico-system/calico-node-mmclm"
Oct 9 01:07:11.659217 kubelet[2623]: I1009 01:07:11.659099 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17f18265-8b74-4e7a-91ae-c90ca2d431b8-tigera-ca-bundle\") pod \"calico-node-mmclm\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " pod="calico-system/calico-node-mmclm"
Oct 9 01:07:11.659217 kubelet[2623]: I1009 01:07:11.659115 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-var-run-calico\") pod \"calico-node-mmclm\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " pod="calico-system/calico-node-mmclm"
Oct 9 01:07:11.659817 kubelet[2623]: I1009 01:07:11.659787 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-var-lib-calico\") pod \"calico-node-mmclm\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " pod="calico-system/calico-node-mmclm"
Oct 9 01:07:11.659852 kubelet[2623]: I1009 01:07:11.659838 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-xtables-lock\") pod \"calico-node-mmclm\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " pod="calico-system/calico-node-mmclm"
Oct 9 01:07:11.659883 kubelet[2623]: I1009 01:07:11.659857 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/17f18265-8b74-4e7a-91ae-c90ca2d431b8-node-certs\") pod \"calico-node-mmclm\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " pod="calico-system/calico-node-mmclm"
Oct 9 01:07:11.659908 kubelet[2623]: I1009 01:07:11.659876 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-cni-net-dir\") pod \"calico-node-mmclm\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " pod="calico-system/calico-node-mmclm"
Oct 9 01:07:11.660002 kubelet[2623]: I1009 01:07:11.659967 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-policysync\") pod \"calico-node-mmclm\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " pod="calico-system/calico-node-mmclm"
Oct 9 01:07:11.660682 kubelet[2623]: I1009 01:07:11.660316 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-flexvol-driver-host\") pod \"calico-node-mmclm\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " pod="calico-system/calico-node-mmclm"
Oct 9 01:07:11.660682 kubelet[2623]: I1009 01:07:11.660344 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwnmf\" (UniqueName: \"kubernetes.io/projected/17f18265-8b74-4e7a-91ae-c90ca2d431b8-kube-api-access-kwnmf\") pod \"calico-node-mmclm\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " pod="calico-system/calico-node-mmclm"
Oct 9 01:07:11.686045 kubelet[2623]: I1009 01:07:11.684237 2623 topology_manager.go:215] "Topology Admit Handler" podUID="72a149ef-a469-42aa-b8b7-4b018e2ec3a1" podNamespace="calico-system" podName="csi-node-driver-t6frj"
Oct 9 01:07:11.686045 kubelet[2623]: E1009 01:07:11.684498 2623 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6frj" podUID="72a149ef-a469-42aa-b8b7-4b018e2ec3a1"
Oct 9 01:07:11.761881 kubelet[2623]: I1009 01:07:11.761334 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/72a149ef-a469-42aa-b8b7-4b018e2ec3a1-varrun\") pod \"csi-node-driver-t6frj\" (UID: \"72a149ef-a469-42aa-b8b7-4b018e2ec3a1\") " pod="calico-system/csi-node-driver-t6frj"
Oct 9 01:07:11.762618 kubelet[2623]: I1009 01:07:11.762090 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfshp\" (UniqueName: \"kubernetes.io/projected/72a149ef-a469-42aa-b8b7-4b018e2ec3a1-kube-api-access-lfshp\") pod \"csi-node-driver-t6frj\" (UID: \"72a149ef-a469-42aa-b8b7-4b018e2ec3a1\") " pod="calico-system/csi-node-driver-t6frj"
Oct 9 01:07:11.762618 kubelet[2623]: I1009 01:07:11.762146 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72a149ef-a469-42aa-b8b7-4b018e2ec3a1-kubelet-dir\") pod \"csi-node-driver-t6frj\" (UID: \"72a149ef-a469-42aa-b8b7-4b018e2ec3a1\") " pod="calico-system/csi-node-driver-t6frj"
Oct 9 01:07:11.762618 kubelet[2623]: I1009 01:07:11.762165 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/72a149ef-a469-42aa-b8b7-4b018e2ec3a1-socket-dir\") pod \"csi-node-driver-t6frj\" (UID: \"72a149ef-a469-42aa-b8b7-4b018e2ec3a1\") " pod="calico-system/csi-node-driver-t6frj"
Oct 9 01:07:11.762618 kubelet[2623]: I1009 01:07:11.762193 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/72a149ef-a469-42aa-b8b7-4b018e2ec3a1-registration-dir\") pod \"csi-node-driver-t6frj\" (UID: \"72a149ef-a469-42aa-b8b7-4b018e2ec3a1\") " pod="calico-system/csi-node-driver-t6frj"
Oct 9 01:07:11.763381 kubelet[2623]: E1009 01:07:11.763356 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.763484 kubelet[2623]: W1009 01:07:11.763377 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.763522 kubelet[2623]: E1009 01:07:11.763485 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.763832 kubelet[2623]: E1009 01:07:11.763818 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.763832 kubelet[2623]: W1009 01:07:11.763831 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.763943 kubelet[2623]: E1009 01:07:11.763929 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.764235 kubelet[2623]: E1009 01:07:11.764220 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.764235 kubelet[2623]: W1009 01:07:11.764234 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.764302 kubelet[2623]: E1009 01:07:11.764275 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.764554 kubelet[2623]: E1009 01:07:11.764540 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.764554 kubelet[2623]: W1009 01:07:11.764552 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.764614 kubelet[2623]: E1009 01:07:11.764590 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.764934 kubelet[2623]: E1009 01:07:11.764917 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.764934 kubelet[2623]: W1009 01:07:11.764929 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.765249 kubelet[2623]: E1009 01:07:11.765174 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.765409 kubelet[2623]: E1009 01:07:11.765393 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.765452 kubelet[2623]: W1009 01:07:11.765408 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.765519 kubelet[2623]: E1009 01:07:11.765497 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.765776 kubelet[2623]: E1009 01:07:11.765754 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.765776 kubelet[2623]: W1009 01:07:11.765774 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.766125 kubelet[2623]: E1009 01:07:11.765830 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.767301 kubelet[2623]: E1009 01:07:11.767276 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.767301 kubelet[2623]: W1009 01:07:11.767294 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.767502 kubelet[2623]: E1009 01:07:11.767455 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.767801 kubelet[2623]: E1009 01:07:11.767785 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.767801 kubelet[2623]: W1009 01:07:11.767800 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.767872 kubelet[2623]: E1009 01:07:11.767814 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.768142 kubelet[2623]: E1009 01:07:11.768127 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.768142 kubelet[2623]: W1009 01:07:11.768141 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.768202 kubelet[2623]: E1009 01:07:11.768167 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.768390 kubelet[2623]: E1009 01:07:11.768378 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.768445 kubelet[2623]: W1009 01:07:11.768390 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.768445 kubelet[2623]: E1009 01:07:11.768400 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.768801 kubelet[2623]: E1009 01:07:11.768786 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.768801 kubelet[2623]: W1009 01:07:11.768801 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.768927 kubelet[2623]: E1009 01:07:11.768898 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.768977 kubelet[2623]: E1009 01:07:11.768947 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.768977 kubelet[2623]: W1009 01:07:11.768955 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.769079 kubelet[2623]: E1009 01:07:11.768997 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.769229 kubelet[2623]: E1009 01:07:11.769217 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.769229 kubelet[2623]: W1009 01:07:11.769228 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.769291 kubelet[2623]: E1009 01:07:11.769238 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.780477 kubelet[2623]: E1009 01:07:11.780225 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.780477 kubelet[2623]: W1009 01:07:11.780240 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.780477 kubelet[2623]: E1009 01:07:11.780252 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.838209 kubelet[2623]: E1009 01:07:11.838175 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 01:07:11.838734 containerd[1447]: time="2024-10-09T01:07:11.838696538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-747ddf789f-tkmtj,Uid:8c853a3e-dcf4-4311-b7d0-caa109780dcc,Namespace:calico-system,Attempt:0,}"
Oct 9 01:07:11.856881 containerd[1447]: time="2024-10-09T01:07:11.856706249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:07:11.856881 containerd[1447]: time="2024-10-09T01:07:11.856796294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:07:11.856881 containerd[1447]: time="2024-10-09T01:07:11.856813575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:07:11.857044 containerd[1447]: time="2024-10-09T01:07:11.857013946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:07:11.862725 kubelet[2623]: E1009 01:07:11.862701 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.862725 kubelet[2623]: W1009 01:07:11.862723 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.862863 kubelet[2623]: E1009 01:07:11.862751 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.863026 kubelet[2623]: E1009 01:07:11.863011 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.863066 kubelet[2623]: W1009 01:07:11.863026 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.863066 kubelet[2623]: E1009 01:07:11.863055 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.863470 kubelet[2623]: E1009 01:07:11.863447 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.863470 kubelet[2623]: W1009 01:07:11.863469 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.863550 kubelet[2623]: E1009 01:07:11.863487 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.863725 kubelet[2623]: E1009 01:07:11.863709 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.863725 kubelet[2623]: W1009 01:07:11.863721 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.863888 kubelet[2623]: E1009 01:07:11.863753 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.864006 kubelet[2623]: E1009 01:07:11.863986 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.864006 kubelet[2623]: W1009 01:07:11.864001 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.864203 kubelet[2623]: E1009 01:07:11.864019 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.864203 kubelet[2623]: E1009 01:07:11.864181 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.864203 kubelet[2623]: W1009 01:07:11.864188 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.864203 kubelet[2623]: E1009 01:07:11.864200 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.864355 kubelet[2623]: E1009 01:07:11.864337 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.864355 kubelet[2623]: W1009 01:07:11.864346 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.864355 kubelet[2623]: E1009 01:07:11.864380 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.864531 kubelet[2623]: E1009 01:07:11.864497 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.864531 kubelet[2623]: W1009 01:07:11.864504 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.864717 kubelet[2623]: E1009 01:07:11.864533 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.864717 kubelet[2623]: E1009 01:07:11.864708 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.864717 kubelet[2623]: W1009 01:07:11.864717 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.864815 kubelet[2623]: E1009 01:07:11.864732 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.864939 kubelet[2623]: E1009 01:07:11.864921 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.864939 kubelet[2623]: W1009 01:07:11.864933 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.865040 kubelet[2623]: E1009 01:07:11.864960 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.865155 kubelet[2623]: E1009 01:07:11.865132 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.865195 kubelet[2623]: W1009 01:07:11.865144 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.865284 kubelet[2623]: E1009 01:07:11.865265 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.865531 kubelet[2623]: E1009 01:07:11.865510 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.865531 kubelet[2623]: W1009 01:07:11.865520 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.865659 kubelet[2623]: E1009 01:07:11.865551 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.866065 kubelet[2623]: E1009 01:07:11.865837 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.866065 kubelet[2623]: W1009 01:07:11.865847 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.866065 kubelet[2623]: E1009 01:07:11.865876 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.866065 kubelet[2623]: E1009 01:07:11.866006 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.866065 kubelet[2623]: W1009 01:07:11.866014 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.866065 kubelet[2623]: E1009 01:07:11.866034 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.866231 kubelet[2623]: E1009 01:07:11.866143 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.866231 kubelet[2623]: W1009 01:07:11.866150 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.866231 kubelet[2623]: E1009 01:07:11.866178 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.866313 kubelet[2623]: E1009 01:07:11.866296 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.866313 kubelet[2623]: W1009 01:07:11.866308 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.866379 kubelet[2623]: E1009 01:07:11.866322 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.866528 kubelet[2623]: E1009 01:07:11.866463 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.866528 kubelet[2623]: W1009 01:07:11.866473 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.866528 kubelet[2623]: E1009 01:07:11.866485 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.866712 kubelet[2623]: E1009 01:07:11.866634 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.866712 kubelet[2623]: W1009 01:07:11.866645 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.866712 kubelet[2623]: E1009 01:07:11.866669 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.866869 kubelet[2623]: E1009 01:07:11.866852 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.866869 kubelet[2623]: W1009 01:07:11.866865 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.867218 kubelet[2623]: E1009 01:07:11.866905 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.867218 kubelet[2623]: E1009 01:07:11.867056 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.867218 kubelet[2623]: W1009 01:07:11.867066 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.868327 kubelet[2623]: E1009 01:07:11.867100 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.868411 kubelet[2623]: E1009 01:07:11.867221 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.868411 kubelet[2623]: W1009 01:07:11.868354 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.868569 kubelet[2623]: E1009 01:07:11.868489 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.869003 kubelet[2623]: E1009 01:07:11.868978 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.869003 kubelet[2623]: W1009 01:07:11.868999 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.869003 kubelet[2623]: E1009 01:07:11.869034 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.869593 kubelet[2623]: E1009 01:07:11.869573 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.869593 kubelet[2623]: W1009 01:07:11.869590 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.869661 kubelet[2623]: E1009 01:07:11.869608 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.871488 kubelet[2623]: E1009 01:07:11.870686 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.871488 kubelet[2623]: W1009 01:07:11.870707 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.871488 kubelet[2623]: E1009 01:07:11.870831 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.871488 kubelet[2623]: E1009 01:07:11.870910 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:07:11.871488 kubelet[2623]: W1009 01:07:11.870918 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:07:11.871488 kubelet[2623]: E1009 01:07:11.870927 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 9 01:07:11.876680 systemd[1]: Started cri-containerd-6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7.scope - libcontainer container 6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7.
Oct 9 01:07:11.878269 kubelet[2623]: E1009 01:07:11.878248 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:11.878269 kubelet[2623]: W1009 01:07:11.878263 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:11.878358 kubelet[2623]: E1009 01:07:11.878277 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:11.883562 kubelet[2623]: E1009 01:07:11.883530 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:11.884012 containerd[1447]: time="2024-10-09T01:07:11.883973289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mmclm,Uid:17f18265-8b74-4e7a-91ae-c90ca2d431b8,Namespace:calico-system,Attempt:0,}" Oct 9 01:07:11.907541 containerd[1447]: time="2024-10-09T01:07:11.907178155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:11.907753 containerd[1447]: time="2024-10-09T01:07:11.907566136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:11.907988 containerd[1447]: time="2024-10-09T01:07:11.907808068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:11.908815 containerd[1447]: time="2024-10-09T01:07:11.908677034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:11.916862 containerd[1447]: time="2024-10-09T01:07:11.916809184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-747ddf789f-tkmtj,Uid:8c853a3e-dcf4-4311-b7d0-caa109780dcc,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7\"" Oct 9 01:07:11.917767 kubelet[2623]: E1009 01:07:11.917586 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:11.918752 containerd[1447]: time="2024-10-09T01:07:11.918714684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 01:07:11.933007 systemd[1]: Started cri-containerd-cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406.scope - libcontainer container cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406. Oct 9 01:07:11.956782 containerd[1447]: time="2024-10-09T01:07:11.956714491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mmclm,Uid:17f18265-8b74-4e7a-91ae-c90ca2d431b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406\"" Oct 9 01:07:11.957913 kubelet[2623]: E1009 01:07:11.957478 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:13.507153 kubelet[2623]: E1009 01:07:13.506945 2623 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6frj" podUID="72a149ef-a469-42aa-b8b7-4b018e2ec3a1" Oct 9 01:07:14.352737 containerd[1447]: 
time="2024-10-09T01:07:14.352684093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:14.353613 containerd[1447]: time="2024-10-09T01:07:14.353458049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Oct 9 01:07:14.354472 containerd[1447]: time="2024-10-09T01:07:14.354433735Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:14.356806 containerd[1447]: time="2024-10-09T01:07:14.356752123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:14.357712 containerd[1447]: time="2024-10-09T01:07:14.357245546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 2.43849914s" Oct 9 01:07:14.357712 containerd[1447]: time="2024-10-09T01:07:14.357278627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Oct 9 01:07:14.358650 containerd[1447]: time="2024-10-09T01:07:14.358446602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 01:07:14.368918 containerd[1447]: time="2024-10-09T01:07:14.368885209Z" level=info msg="CreateContainer within sandbox \"6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 01:07:14.384893 containerd[1447]: time="2024-10-09T01:07:14.384092679Z" level=info msg="CreateContainer within sandbox \"6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f\"" Oct 9 01:07:14.385950 containerd[1447]: time="2024-10-09T01:07:14.385911524Z" level=info msg="StartContainer for \"db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f\"" Oct 9 01:07:14.416044 systemd[1]: Started cri-containerd-db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f.scope - libcontainer container db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f. Oct 9 01:07:14.452228 containerd[1447]: time="2024-10-09T01:07:14.452187179Z" level=info msg="StartContainer for \"db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f\" returns successfully" Oct 9 01:07:14.570933 containerd[1447]: time="2024-10-09T01:07:14.570715152Z" level=info msg="StopContainer for \"db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f\" with timeout 300 (s)" Oct 9 01:07:14.575599 containerd[1447]: time="2024-10-09T01:07:14.575563619Z" level=info msg="Stop container \"db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f\" with signal terminated" Oct 9 01:07:14.582078 kubelet[2623]: I1009 01:07:14.581989 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-747ddf789f-tkmtj" podStartSLOduration=1.141804777 podStartE2EDuration="3.581973878s" podCreationTimestamp="2024-10-09 01:07:11 +0000 UTC" firstStartedPulling="2024-10-09 01:07:11.918133614 +0000 UTC m=+24.499651419" lastFinishedPulling="2024-10-09 01:07:14.358302715 +0000 UTC m=+26.939820520" observedRunningTime="2024-10-09 01:07:14.581945877 +0000 UTC m=+27.163463842" watchObservedRunningTime="2024-10-09 01:07:14.581973878 +0000 
UTC m=+27.163491683" Oct 9 01:07:14.586496 systemd[1]: cri-containerd-db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f.scope: Deactivated successfully. Oct 9 01:07:14.690933 containerd[1447]: time="2024-10-09T01:07:14.685707721Z" level=info msg="shim disconnected" id=db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f namespace=k8s.io Oct 9 01:07:14.690933 containerd[1447]: time="2024-10-09T01:07:14.690819160Z" level=warning msg="cleaning up after shim disconnected" id=db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f namespace=k8s.io Oct 9 01:07:14.690933 containerd[1447]: time="2024-10-09T01:07:14.690831040Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:07:14.717620 containerd[1447]: time="2024-10-09T01:07:14.717523487Z" level=info msg="StopContainer for \"db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f\" returns successfully" Oct 9 01:07:14.718331 containerd[1447]: time="2024-10-09T01:07:14.718304643Z" level=info msg="StopPodSandbox for \"6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7\"" Oct 9 01:07:14.718390 containerd[1447]: time="2024-10-09T01:07:14.718343205Z" level=info msg="Container to stop \"db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 01:07:14.723572 systemd[1]: cri-containerd-6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7.scope: Deactivated successfully. 
Oct 9 01:07:14.747535 containerd[1447]: time="2024-10-09T01:07:14.747335399Z" level=info msg="shim disconnected" id=6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7 namespace=k8s.io Oct 9 01:07:14.747535 containerd[1447]: time="2024-10-09T01:07:14.747394641Z" level=warning msg="cleaning up after shim disconnected" id=6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7 namespace=k8s.io Oct 9 01:07:14.747535 containerd[1447]: time="2024-10-09T01:07:14.747403842Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:07:14.756809 containerd[1447]: time="2024-10-09T01:07:14.756768079Z" level=warning msg="cleanup warnings time=\"2024-10-09T01:07:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 01:07:14.757665 containerd[1447]: time="2024-10-09T01:07:14.757635159Z" level=info msg="TearDown network for sandbox \"6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7\" successfully" Oct 9 01:07:14.757665 containerd[1447]: time="2024-10-09T01:07:14.757663801Z" level=info msg="StopPodSandbox for \"6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7\" returns successfully" Oct 9 01:07:14.783228 kubelet[2623]: I1009 01:07:14.783187 2623 topology_manager.go:215] "Topology Admit Handler" podUID="3ee5630d-5310-40e8-bf55-7007b700588f" podNamespace="calico-system" podName="calico-typha-86476c85d9-qjn5d" Oct 9 01:07:14.783447 kubelet[2623]: E1009 01:07:14.783258 2623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8c853a3e-dcf4-4311-b7d0-caa109780dcc" containerName="calico-typha" Oct 9 01:07:14.783447 kubelet[2623]: I1009 01:07:14.783283 2623 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c853a3e-dcf4-4311-b7d0-caa109780dcc" containerName="calico-typha" Oct 9 01:07:14.784346 kubelet[2623]: E1009 01:07:14.784286 2623 driver-call.go:262] Failed to 
unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.784346 kubelet[2623]: W1009 01:07:14.784333 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.784346 kubelet[2623]: E1009 01:07:14.784350 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.784465 kubelet[2623]: I1009 01:07:14.784377 2623 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jm89\" (UniqueName: \"kubernetes.io/projected/8c853a3e-dcf4-4311-b7d0-caa109780dcc-kube-api-access-6jm89\") pod \"8c853a3e-dcf4-4311-b7d0-caa109780dcc\" (UID: \"8c853a3e-dcf4-4311-b7d0-caa109780dcc\") " Oct 9 01:07:14.784690 kubelet[2623]: E1009 01:07:14.784640 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.784690 kubelet[2623]: W1009 01:07:14.784664 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.784690 kubelet[2623]: E1009 01:07:14.784677 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.785200 kubelet[2623]: I1009 01:07:14.784696 2623 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8c853a3e-dcf4-4311-b7d0-caa109780dcc-typha-certs\") pod \"8c853a3e-dcf4-4311-b7d0-caa109780dcc\" (UID: \"8c853a3e-dcf4-4311-b7d0-caa109780dcc\") " Oct 9 01:07:14.785569 kubelet[2623]: E1009 01:07:14.785431 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.785569 kubelet[2623]: W1009 01:07:14.785450 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.785569 kubelet[2623]: E1009 01:07:14.785467 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.785569 kubelet[2623]: I1009 01:07:14.785489 2623 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c853a3e-dcf4-4311-b7d0-caa109780dcc-tigera-ca-bundle\") pod \"8c853a3e-dcf4-4311-b7d0-caa109780dcc\" (UID: \"8c853a3e-dcf4-4311-b7d0-caa109780dcc\") " Oct 9 01:07:14.797535 systemd[1]: Created slice kubepods-besteffort-pod3ee5630d_5310_40e8_bf55_7007b700588f.slice - libcontainer container kubepods-besteffort-pod3ee5630d_5310_40e8_bf55_7007b700588f.slice. 
Oct 9 01:07:14.804546 kubelet[2623]: E1009 01:07:14.804518 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.804546 kubelet[2623]: W1009 01:07:14.804542 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.804655 kubelet[2623]: E1009 01:07:14.804565 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.805474 kubelet[2623]: E1009 01:07:14.805191 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.805474 kubelet[2623]: W1009 01:07:14.805213 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.805474 kubelet[2623]: E1009 01:07:14.805226 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.818368 kubelet[2623]: I1009 01:07:14.818338 2623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c853a3e-dcf4-4311-b7d0-caa109780dcc-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "8c853a3e-dcf4-4311-b7d0-caa109780dcc" (UID: "8c853a3e-dcf4-4311-b7d0-caa109780dcc"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 9 01:07:14.818565 kubelet[2623]: I1009 01:07:14.818533 2623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c853a3e-dcf4-4311-b7d0-caa109780dcc-kube-api-access-6jm89" (OuterVolumeSpecName: "kube-api-access-6jm89") pod "8c853a3e-dcf4-4311-b7d0-caa109780dcc" (UID: "8c853a3e-dcf4-4311-b7d0-caa109780dcc"). InnerVolumeSpecName "kube-api-access-6jm89". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 01:07:14.819756 kubelet[2623]: E1009 01:07:14.819723 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.819815 kubelet[2623]: W1009 01:07:14.819759 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.819815 kubelet[2623]: E1009 01:07:14.819774 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.820083 kubelet[2623]: I1009 01:07:14.820051 2623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c853a3e-dcf4-4311-b7d0-caa109780dcc-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "8c853a3e-dcf4-4311-b7d0-caa109780dcc" (UID: "8c853a3e-dcf4-4311-b7d0-caa109780dcc"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 01:07:14.867062 kubelet[2623]: E1009 01:07:14.867022 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.867062 kubelet[2623]: W1009 01:07:14.867046 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.867188 kubelet[2623]: E1009 01:07:14.867074 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.867325 kubelet[2623]: E1009 01:07:14.867311 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.867325 kubelet[2623]: W1009 01:07:14.867324 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.867380 kubelet[2623]: E1009 01:07:14.867334 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.867567 kubelet[2623]: E1009 01:07:14.867548 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.867567 kubelet[2623]: W1009 01:07:14.867561 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.867622 kubelet[2623]: E1009 01:07:14.867570 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.867765 kubelet[2623]: E1009 01:07:14.867733 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.867765 kubelet[2623]: W1009 01:07:14.867764 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.867822 kubelet[2623]: E1009 01:07:14.867775 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.867984 kubelet[2623]: E1009 01:07:14.867970 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.867984 kubelet[2623]: W1009 01:07:14.867982 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.868046 kubelet[2623]: E1009 01:07:14.867993 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.868225 kubelet[2623]: E1009 01:07:14.868209 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.868225 kubelet[2623]: W1009 01:07:14.868220 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.868273 kubelet[2623]: E1009 01:07:14.868229 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.868398 kubelet[2623]: E1009 01:07:14.868386 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.868398 kubelet[2623]: W1009 01:07:14.868397 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.868456 kubelet[2623]: E1009 01:07:14.868404 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.868551 kubelet[2623]: E1009 01:07:14.868539 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.868551 kubelet[2623]: W1009 01:07:14.868550 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.868603 kubelet[2623]: E1009 01:07:14.868558 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.868720 kubelet[2623]: E1009 01:07:14.868709 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.868766 kubelet[2623]: W1009 01:07:14.868720 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.868766 kubelet[2623]: E1009 01:07:14.868728 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.868905 kubelet[2623]: E1009 01:07:14.868883 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.868905 kubelet[2623]: W1009 01:07:14.868893 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.868956 kubelet[2623]: E1009 01:07:14.868910 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.869063 kubelet[2623]: E1009 01:07:14.869045 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.869063 kubelet[2623]: W1009 01:07:14.869059 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.869117 kubelet[2623]: E1009 01:07:14.869068 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.869301 kubelet[2623]: E1009 01:07:14.869291 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.869301 kubelet[2623]: W1009 01:07:14.869301 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.869351 kubelet[2623]: E1009 01:07:14.869308 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.886210 kubelet[2623]: E1009 01:07:14.886181 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.886210 kubelet[2623]: W1009 01:07:14.886202 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.886210 kubelet[2623]: E1009 01:07:14.886217 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.886347 kubelet[2623]: I1009 01:07:14.886243 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3ee5630d-5310-40e8-bf55-7007b700588f-typha-certs\") pod \"calico-typha-86476c85d9-qjn5d\" (UID: \"3ee5630d-5310-40e8-bf55-7007b700588f\") " pod="calico-system/calico-typha-86476c85d9-qjn5d" Oct 9 01:07:14.886485 kubelet[2623]: E1009 01:07:14.886460 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.886485 kubelet[2623]: W1009 01:07:14.886474 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.886531 kubelet[2623]: E1009 01:07:14.886488 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.886531 kubelet[2623]: I1009 01:07:14.886503 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsrfq\" (UniqueName: \"kubernetes.io/projected/3ee5630d-5310-40e8-bf55-7007b700588f-kube-api-access-dsrfq\") pod \"calico-typha-86476c85d9-qjn5d\" (UID: \"3ee5630d-5310-40e8-bf55-7007b700588f\") " pod="calico-system/calico-typha-86476c85d9-qjn5d" Oct 9 01:07:14.886749 kubelet[2623]: E1009 01:07:14.886722 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.886749 kubelet[2623]: W1009 01:07:14.886738 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.886808 kubelet[2623]: E1009 01:07:14.886765 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.886977 kubelet[2623]: E1009 01:07:14.886962 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.886977 kubelet[2623]: W1009 01:07:14.886974 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.887038 kubelet[2623]: E1009 01:07:14.886987 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.887428 kubelet[2623]: E1009 01:07:14.887214 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.887428 kubelet[2623]: W1009 01:07:14.887248 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.887428 kubelet[2623]: E1009 01:07:14.887273 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.887428 kubelet[2623]: I1009 01:07:14.887292 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ee5630d-5310-40e8-bf55-7007b700588f-tigera-ca-bundle\") pod \"calico-typha-86476c85d9-qjn5d\" (UID: \"3ee5630d-5310-40e8-bf55-7007b700588f\") " pod="calico-system/calico-typha-86476c85d9-qjn5d" Oct 9 01:07:14.887428 kubelet[2623]: I1009 01:07:14.887331 2623 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6jm89\" (UniqueName: \"kubernetes.io/projected/8c853a3e-dcf4-4311-b7d0-caa109780dcc-kube-api-access-6jm89\") on node \"localhost\" DevicePath \"\"" Oct 9 01:07:14.887428 kubelet[2623]: I1009 01:07:14.887343 2623 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8c853a3e-dcf4-4311-b7d0-caa109780dcc-typha-certs\") on node \"localhost\" DevicePath \"\"" Oct 9 01:07:14.887428 kubelet[2623]: I1009 01:07:14.887350 2623 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c853a3e-dcf4-4311-b7d0-caa109780dcc-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 9 01:07:14.887686 kubelet[2623]: E1009 
01:07:14.887602 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.887686 kubelet[2623]: W1009 01:07:14.887617 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.887686 kubelet[2623]: E1009 01:07:14.887630 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.887824 kubelet[2623]: E1009 01:07:14.887806 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.887864 kubelet[2623]: W1009 01:07:14.887817 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.887918 kubelet[2623]: E1009 01:07:14.887889 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.888054 kubelet[2623]: E1009 01:07:14.888035 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.888054 kubelet[2623]: W1009 01:07:14.888049 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.888111 kubelet[2623]: E1009 01:07:14.888057 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.888494 kubelet[2623]: E1009 01:07:14.888279 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.888494 kubelet[2623]: W1009 01:07:14.888293 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.888494 kubelet[2623]: E1009 01:07:14.888302 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.987871 kubelet[2623]: E1009 01:07:14.987787 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.988145 kubelet[2623]: W1009 01:07:14.987986 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.988145 kubelet[2623]: E1009 01:07:14.988012 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.988367 kubelet[2623]: E1009 01:07:14.988241 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.988367 kubelet[2623]: W1009 01:07:14.988252 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.988367 kubelet[2623]: E1009 01:07:14.988267 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.988552 kubelet[2623]: E1009 01:07:14.988513 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.988552 kubelet[2623]: W1009 01:07:14.988532 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.988552 kubelet[2623]: E1009 01:07:14.988552 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.988949 kubelet[2623]: E1009 01:07:14.988923 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.988949 kubelet[2623]: W1009 01:07:14.988939 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.989032 kubelet[2623]: E1009 01:07:14.988956 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.989183 kubelet[2623]: E1009 01:07:14.989169 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.989183 kubelet[2623]: W1009 01:07:14.989183 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.989248 kubelet[2623]: E1009 01:07:14.989198 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.989471 kubelet[2623]: E1009 01:07:14.989456 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.989509 kubelet[2623]: W1009 01:07:14.989473 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.989509 kubelet[2623]: E1009 01:07:14.989500 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.989717 kubelet[2623]: E1009 01:07:14.989703 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.989717 kubelet[2623]: W1009 01:07:14.989716 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.989918 kubelet[2623]: E1009 01:07:14.989778 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.990104 kubelet[2623]: E1009 01:07:14.990084 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.990104 kubelet[2623]: W1009 01:07:14.990099 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.990156 kubelet[2623]: E1009 01:07:14.990124 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.990452 kubelet[2623]: E1009 01:07:14.990435 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.990452 kubelet[2623]: W1009 01:07:14.990449 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.990514 kubelet[2623]: E1009 01:07:14.990470 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.990861 kubelet[2623]: E1009 01:07:14.990840 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.990861 kubelet[2623]: W1009 01:07:14.990859 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.990925 kubelet[2623]: E1009 01:07:14.990876 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.991210 kubelet[2623]: E1009 01:07:14.991172 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.991210 kubelet[2623]: W1009 01:07:14.991188 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.991296 kubelet[2623]: E1009 01:07:14.991247 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.991838 kubelet[2623]: E1009 01:07:14.991483 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.991838 kubelet[2623]: W1009 01:07:14.991496 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.991838 kubelet[2623]: E1009 01:07:14.991507 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.991838 kubelet[2623]: E1009 01:07:14.991708 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.991838 kubelet[2623]: W1009 01:07:14.991719 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.991838 kubelet[2623]: E1009 01:07:14.991728 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.992116 kubelet[2623]: E1009 01:07:14.991878 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.992116 kubelet[2623]: W1009 01:07:14.991886 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.992116 kubelet[2623]: E1009 01:07:14.991894 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.992116 kubelet[2623]: E1009 01:07:14.992076 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.992116 kubelet[2623]: W1009 01:07:14.992085 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.992116 kubelet[2623]: E1009 01:07:14.992093 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:14.992669 kubelet[2623]: E1009 01:07:14.992650 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.992669 kubelet[2623]: W1009 01:07:14.992668 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.992738 kubelet[2623]: E1009 01:07:14.992680 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:14.993270 kubelet[2623]: E1009 01:07:14.993248 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:14.993306 kubelet[2623]: W1009 01:07:14.993270 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:14.993306 kubelet[2623]: E1009 01:07:14.993285 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:07:15.004036 kubelet[2623]: E1009 01:07:15.003857 2623 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:07:15.004036 kubelet[2623]: W1009 01:07:15.003882 2623 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:07:15.004036 kubelet[2623]: E1009 01:07:15.003896 2623 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:07:15.102706 kubelet[2623]: E1009 01:07:15.102606 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:15.103347 containerd[1447]: time="2024-10-09T01:07:15.103042944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86476c85d9-qjn5d,Uid:3ee5630d-5310-40e8-bf55-7007b700588f,Namespace:calico-system,Attempt:0,}" Oct 9 01:07:15.124044 containerd[1447]: time="2024-10-09T01:07:15.123877040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:15.124044 containerd[1447]: time="2024-10-09T01:07:15.123932522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:15.124044 containerd[1447]: time="2024-10-09T01:07:15.123950283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:15.124044 containerd[1447]: time="2024-10-09T01:07:15.124018726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:15.142933 systemd[1]: Started cri-containerd-c093b77e55143034388c0629ba57658df08901d91d5b2619429e24971b3a5b2f.scope - libcontainer container c093b77e55143034388c0629ba57658df08901d91d5b2619429e24971b3a5b2f. Oct 9 01:07:15.171505 containerd[1447]: time="2024-10-09T01:07:15.171444375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86476c85d9-qjn5d,Uid:3ee5630d-5310-40e8-bf55-7007b700588f,Namespace:calico-system,Attempt:0,} returns sandbox id \"c093b77e55143034388c0629ba57658df08901d91d5b2619429e24971b3a5b2f\"" Oct 9 01:07:15.172099 kubelet[2623]: E1009 01:07:15.172082 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:15.187796 containerd[1447]: time="2024-10-09T01:07:15.187739267Z" level=info msg="CreateContainer within sandbox \"c093b77e55143034388c0629ba57658df08901d91d5b2619429e24971b3a5b2f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 01:07:15.200699 containerd[1447]: time="2024-10-09T01:07:15.200541122Z" level=info msg="CreateContainer within sandbox \"c093b77e55143034388c0629ba57658df08901d91d5b2619429e24971b3a5b2f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d28453ccc7c534c76a7b42784ca4458bd9454851aa4f73f6dcf326a3e2d535ee\"" Oct 9 01:07:15.201214 containerd[1447]: time="2024-10-09T01:07:15.201183991Z" level=info msg="StartContainer for \"d28453ccc7c534c76a7b42784ca4458bd9454851aa4f73f6dcf326a3e2d535ee\"" Oct 9 01:07:15.229447 systemd[1]: Started cri-containerd-d28453ccc7c534c76a7b42784ca4458bd9454851aa4f73f6dcf326a3e2d535ee.scope - libcontainer container d28453ccc7c534c76a7b42784ca4458bd9454851aa4f73f6dcf326a3e2d535ee. 
Oct 9 01:07:15.268681 containerd[1447]: time="2024-10-09T01:07:15.268556975Z" level=info msg="StartContainer for \"d28453ccc7c534c76a7b42784ca4458bd9454851aa4f73f6dcf326a3e2d535ee\" returns successfully" Oct 9 01:07:15.372951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f-rootfs.mount: Deactivated successfully. Oct 9 01:07:15.373038 systemd[1]: var-lib-kubelet-pods-8c853a3e\x2ddcf4\x2d4311\x2db7d0\x2dcaa109780dcc-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Oct 9 01:07:15.373093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7-rootfs.mount: Deactivated successfully. Oct 9 01:07:15.373139 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7-shm.mount: Deactivated successfully. Oct 9 01:07:15.373193 systemd[1]: var-lib-kubelet-pods-8c853a3e\x2ddcf4\x2d4311\x2db7d0\x2dcaa109780dcc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6jm89.mount: Deactivated successfully. Oct 9 01:07:15.373237 systemd[1]: var-lib-kubelet-pods-8c853a3e\x2ddcf4\x2d4311\x2db7d0\x2dcaa109780dcc-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Oct 9 01:07:15.421983 containerd[1447]: time="2024-10-09T01:07:15.421935461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:15.423183 containerd[1447]: time="2024-10-09T01:07:15.423141115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Oct 9 01:07:15.424188 containerd[1447]: time="2024-10-09T01:07:15.424159561Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:15.426894 containerd[1447]: time="2024-10-09T01:07:15.426861562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:15.427826 containerd[1447]: time="2024-10-09T01:07:15.427451509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.068975546s" Oct 9 01:07:15.427826 containerd[1447]: time="2024-10-09T01:07:15.427493311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Oct 9 01:07:15.431263 containerd[1447]: time="2024-10-09T01:07:15.431225478Z" level=info msg="CreateContainer within sandbox \"cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 
01:07:15.445133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3562252705.mount: Deactivated successfully. Oct 9 01:07:15.448644 containerd[1447]: time="2024-10-09T01:07:15.448601499Z" level=info msg="CreateContainer within sandbox \"cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807\"" Oct 9 01:07:15.449158 containerd[1447]: time="2024-10-09T01:07:15.449135282Z" level=info msg="StartContainer for \"60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807\"" Oct 9 01:07:15.477977 systemd[1]: Started cri-containerd-60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807.scope - libcontainer container 60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807. Oct 9 01:07:15.509261 kubelet[2623]: E1009 01:07:15.509146 2623 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6frj" podUID="72a149ef-a469-42aa-b8b7-4b018e2ec3a1" Oct 9 01:07:15.523830 containerd[1447]: time="2024-10-09T01:07:15.523559264Z" level=info msg="StartContainer for \"60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807\" returns successfully" Oct 9 01:07:15.537812 systemd[1]: Removed slice kubepods-besteffort-pod8c853a3e_dcf4_4311_b7d0_caa109780dcc.slice - libcontainer container kubepods-besteffort-pod8c853a3e_dcf4_4311_b7d0_caa109780dcc.slice. Oct 9 01:07:15.552431 systemd[1]: cri-containerd-60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807.scope: Deactivated successfully. 
Oct 9 01:07:15.575899 containerd[1447]: time="2024-10-09T01:07:15.575740046Z" level=info msg="StopContainer for \"60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807\" with timeout 5 (s)" Oct 9 01:07:15.578457 kubelet[2623]: I1009 01:07:15.578189 2623 scope.go:117] "RemoveContainer" containerID="db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f" Oct 9 01:07:15.581199 containerd[1447]: time="2024-10-09T01:07:15.581135569Z" level=info msg="Stop container \"60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807\" with signal terminated" Oct 9 01:07:15.583169 containerd[1447]: time="2024-10-09T01:07:15.582568233Z" level=info msg="shim disconnected" id=60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807 namespace=k8s.io Oct 9 01:07:15.583169 containerd[1447]: time="2024-10-09T01:07:15.582616155Z" level=warning msg="cleaning up after shim disconnected" id=60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807 namespace=k8s.io Oct 9 01:07:15.583169 containerd[1447]: time="2024-10-09T01:07:15.582689358Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:07:15.584024 containerd[1447]: time="2024-10-09T01:07:15.583810249Z" level=info msg="RemoveContainer for \"db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f\"" Oct 9 01:07:15.590765 containerd[1447]: time="2024-10-09T01:07:15.590517470Z" level=info msg="RemoveContainer for \"db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f\" returns successfully" Oct 9 01:07:15.593414 kubelet[2623]: I1009 01:07:15.590949 2623 scope.go:117] "RemoveContainer" containerID="db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f" Oct 9 01:07:15.594223 containerd[1447]: time="2024-10-09T01:07:15.593168629Z" level=error msg="ContainerStatus for \"db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f\": not found" Oct 9 01:07:15.595763 kubelet[2623]: E1009 01:07:15.595060 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:15.598365 containerd[1447]: time="2024-10-09T01:07:15.598320500Z" level=warning msg="cleanup warnings time=\"2024-10-09T01:07:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 01:07:15.603085 kubelet[2623]: E1009 01:07:15.602416 2623 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f\": not found" containerID="db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f" Oct 9 01:07:15.603294 kubelet[2623]: I1009 01:07:15.603229 2623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f"} err="failed to get container status \"db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"db573b0da87b75a3da8b632d01aee482d72a28d77a61eb2ade1498ad872f1a9f\": not found" Oct 9 01:07:15.614647 containerd[1447]: time="2024-10-09T01:07:15.614606071Z" level=info msg="StopContainer for \"60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807\" returns successfully" Oct 9 01:07:15.615116 containerd[1447]: time="2024-10-09T01:07:15.615094773Z" level=info msg="StopPodSandbox for \"cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406\"" Oct 9 01:07:15.615164 containerd[1447]: time="2024-10-09T01:07:15.615130695Z" level=info msg="Container to 
stop \"60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 01:07:15.617387 kubelet[2623]: I1009 01:07:15.617341 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-86476c85d9-qjn5d" podStartSLOduration=3.6173258329999998 podStartE2EDuration="3.617325833s" podCreationTimestamp="2024-10-09 01:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:07:15.612156161 +0000 UTC m=+28.193673966" watchObservedRunningTime="2024-10-09 01:07:15.617325833 +0000 UTC m=+28.198843598" Oct 9 01:07:15.624585 systemd[1]: cri-containerd-cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406.scope: Deactivated successfully. Oct 9 01:07:15.646482 containerd[1447]: time="2024-10-09T01:07:15.646303334Z" level=info msg="shim disconnected" id=cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406 namespace=k8s.io Oct 9 01:07:15.646482 containerd[1447]: time="2024-10-09T01:07:15.646352977Z" level=warning msg="cleaning up after shim disconnected" id=cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406 namespace=k8s.io Oct 9 01:07:15.646482 containerd[1447]: time="2024-10-09T01:07:15.646361017Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:07:15.656831 containerd[1447]: time="2024-10-09T01:07:15.656413268Z" level=warning msg="cleanup warnings time=\"2024-10-09T01:07:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 01:07:15.657247 containerd[1447]: time="2024-10-09T01:07:15.657205904Z" level=info msg="TearDown network for sandbox \"cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406\" successfully" Oct 9 01:07:15.657247 containerd[1447]: 
time="2024-10-09T01:07:15.657235905Z" level=info msg="StopPodSandbox for \"cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406\" returns successfully" Oct 9 01:07:15.693771 kubelet[2623]: I1009 01:07:15.693414 2623 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-xtables-lock\") pod \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " Oct 9 01:07:15.693771 kubelet[2623]: I1009 01:07:15.693469 2623 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17f18265-8b74-4e7a-91ae-c90ca2d431b8-tigera-ca-bundle\") pod \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " Oct 9 01:07:15.693771 kubelet[2623]: I1009 01:07:15.693470 2623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "17f18265-8b74-4e7a-91ae-c90ca2d431b8" (UID: "17f18265-8b74-4e7a-91ae-c90ca2d431b8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:07:15.693771 kubelet[2623]: I1009 01:07:15.693510 2623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "17f18265-8b74-4e7a-91ae-c90ca2d431b8" (UID: "17f18265-8b74-4e7a-91ae-c90ca2d431b8"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:07:15.693771 kubelet[2623]: I1009 01:07:15.693485 2623 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-cni-log-dir\") pod \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " Oct 9 01:07:15.694027 kubelet[2623]: I1009 01:07:15.693548 2623 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwnmf\" (UniqueName: \"kubernetes.io/projected/17f18265-8b74-4e7a-91ae-c90ca2d431b8-kube-api-access-kwnmf\") pod \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " Oct 9 01:07:15.694027 kubelet[2623]: I1009 01:07:15.693567 2623 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-cni-bin-dir\") pod \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " Oct 9 01:07:15.694027 kubelet[2623]: I1009 01:07:15.693583 2623 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-policysync\") pod \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " Oct 9 01:07:15.694027 kubelet[2623]: I1009 01:07:15.693600 2623 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-var-lib-calico\") pod \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " Oct 9 01:07:15.694027 kubelet[2623]: I1009 01:07:15.693639 2623 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/17f18265-8b74-4e7a-91ae-c90ca2d431b8-node-certs\") pod \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " Oct 9 01:07:15.694027 kubelet[2623]: I1009 01:07:15.693654 2623 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-cni-net-dir\") pod \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " Oct 9 01:07:15.694163 kubelet[2623]: I1009 01:07:15.693669 2623 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-var-run-calico\") pod \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " Oct 9 01:07:15.694163 kubelet[2623]: I1009 01:07:15.693683 2623 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-flexvol-driver-host\") pod \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " Oct 9 01:07:15.694163 kubelet[2623]: I1009 01:07:15.693700 2623 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-lib-modules\") pod \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\" (UID: \"17f18265-8b74-4e7a-91ae-c90ca2d431b8\") " Oct 9 01:07:15.694163 kubelet[2623]: I1009 01:07:15.693799 2623 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 9 01:07:15.694163 kubelet[2623]: I1009 01:07:15.693811 2623 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Oct 9 01:07:15.694163 kubelet[2623]: I1009 01:07:15.693864 2623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f18265-8b74-4e7a-91ae-c90ca2d431b8-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "17f18265-8b74-4e7a-91ae-c90ca2d431b8" (UID: "17f18265-8b74-4e7a-91ae-c90ca2d431b8"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 01:07:15.694292 kubelet[2623]: I1009 01:07:15.693892 2623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "17f18265-8b74-4e7a-91ae-c90ca2d431b8" (UID: "17f18265-8b74-4e7a-91ae-c90ca2d431b8"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:07:15.694292 kubelet[2623]: I1009 01:07:15.694065 2623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "17f18265-8b74-4e7a-91ae-c90ca2d431b8" (UID: "17f18265-8b74-4e7a-91ae-c90ca2d431b8"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:07:15.694292 kubelet[2623]: I1009 01:07:15.694088 2623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "17f18265-8b74-4e7a-91ae-c90ca2d431b8" (UID: "17f18265-8b74-4e7a-91ae-c90ca2d431b8"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:07:15.694292 kubelet[2623]: I1009 01:07:15.694104 2623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "17f18265-8b74-4e7a-91ae-c90ca2d431b8" (UID: "17f18265-8b74-4e7a-91ae-c90ca2d431b8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:07:15.695113 kubelet[2623]: I1009 01:07:15.694359 2623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-policysync" (OuterVolumeSpecName: "policysync") pod "17f18265-8b74-4e7a-91ae-c90ca2d431b8" (UID: "17f18265-8b74-4e7a-91ae-c90ca2d431b8"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:07:15.695113 kubelet[2623]: I1009 01:07:15.694397 2623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "17f18265-8b74-4e7a-91ae-c90ca2d431b8" (UID: "17f18265-8b74-4e7a-91ae-c90ca2d431b8"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:07:15.695113 kubelet[2623]: I1009 01:07:15.694403 2623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "17f18265-8b74-4e7a-91ae-c90ca2d431b8" (UID: "17f18265-8b74-4e7a-91ae-c90ca2d431b8"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 01:07:15.696314 kubelet[2623]: I1009 01:07:15.696262 2623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f18265-8b74-4e7a-91ae-c90ca2d431b8-kube-api-access-kwnmf" (OuterVolumeSpecName: "kube-api-access-kwnmf") pod "17f18265-8b74-4e7a-91ae-c90ca2d431b8" (UID: "17f18265-8b74-4e7a-91ae-c90ca2d431b8"). InnerVolumeSpecName "kube-api-access-kwnmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 01:07:15.697873 kubelet[2623]: I1009 01:07:15.697818 2623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f18265-8b74-4e7a-91ae-c90ca2d431b8-node-certs" (OuterVolumeSpecName: "node-certs") pod "17f18265-8b74-4e7a-91ae-c90ca2d431b8" (UID: "17f18265-8b74-4e7a-91ae-c90ca2d431b8"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 9 01:07:15.794372 kubelet[2623]: I1009 01:07:15.793952 2623 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Oct 9 01:07:15.794372 kubelet[2623]: I1009 01:07:15.793989 2623 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/17f18265-8b74-4e7a-91ae-c90ca2d431b8-node-certs\") on node \"localhost\" DevicePath \"\"" Oct 9 01:07:15.794372 kubelet[2623]: I1009 01:07:15.793998 2623 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Oct 9 01:07:15.794372 kubelet[2623]: I1009 01:07:15.794006 2623 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-var-run-calico\") on node \"localhost\" DevicePath \"\"" 
Oct 9 01:07:15.794372 kubelet[2623]: I1009 01:07:15.794015 2623 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Oct 9 01:07:15.794372 kubelet[2623]: I1009 01:07:15.794025 2623 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 9 01:07:15.794372 kubelet[2623]: I1009 01:07:15.794033 2623 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17f18265-8b74-4e7a-91ae-c90ca2d431b8-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 9 01:07:15.794372 kubelet[2623]: I1009 01:07:15.794040 2623 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kwnmf\" (UniqueName: \"kubernetes.io/projected/17f18265-8b74-4e7a-91ae-c90ca2d431b8-kube-api-access-kwnmf\") on node \"localhost\" DevicePath \"\"" Oct 9 01:07:15.794633 kubelet[2623]: I1009 01:07:15.794048 2623 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Oct 9 01:07:15.794633 kubelet[2623]: I1009 01:07:15.794056 2623 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/17f18265-8b74-4e7a-91ae-c90ca2d431b8-policysync\") on node \"localhost\" DevicePath \"\"" Oct 9 01:07:16.367967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807-rootfs.mount: Deactivated successfully. 
Oct 9 01:07:16.368069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406-rootfs.mount: Deactivated successfully. Oct 9 01:07:16.368118 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406-shm.mount: Deactivated successfully. Oct 9 01:07:16.368170 systemd[1]: var-lib-kubelet-pods-17f18265\x2d8b74\x2d4e7a\x2d91ae\x2dc90ca2d431b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkwnmf.mount: Deactivated successfully. Oct 9 01:07:16.368220 systemd[1]: var-lib-kubelet-pods-17f18265\x2d8b74\x2d4e7a\x2d91ae\x2dc90ca2d431b8-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Oct 9 01:07:16.597151 kubelet[2623]: I1009 01:07:16.597068 2623 scope.go:117] "RemoveContainer" containerID="60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807" Oct 9 01:07:16.599447 containerd[1447]: time="2024-10-09T01:07:16.599128428Z" level=info msg="RemoveContainer for \"60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807\"" Oct 9 01:07:16.603069 systemd[1]: Removed slice kubepods-besteffort-pod17f18265_8b74_4e7a_91ae_c90ca2d431b8.slice - libcontainer container kubepods-besteffort-pod17f18265_8b74_4e7a_91ae_c90ca2d431b8.slice. 
Oct 9 01:07:16.604007 containerd[1447]: time="2024-10-09T01:07:16.603906235Z" level=info msg="RemoveContainer for \"60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807\" returns successfully" Oct 9 01:07:16.604312 kubelet[2623]: I1009 01:07:16.604125 2623 scope.go:117] "RemoveContainer" containerID="60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807" Oct 9 01:07:16.605113 kubelet[2623]: E1009 01:07:16.604427 2623 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807\": not found" containerID="60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807" Oct 9 01:07:16.605113 kubelet[2623]: I1009 01:07:16.604476 2623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807"} err="failed to get container status \"60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807\": rpc error: code = NotFound desc = an error occurred when try to find container \"60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807\": not found" Oct 9 01:07:16.605205 containerd[1447]: time="2024-10-09T01:07:16.604303172Z" level=error msg="ContainerStatus for \"60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60739826f014438b34178c76cf680c5abf1191084753c4d826cf379d7e847807\": not found" Oct 9 01:07:16.640445 kubelet[2623]: I1009 01:07:16.640302 2623 topology_manager.go:215] "Topology Admit Handler" podUID="8827e850-bbcb-4f14-8ab2-755d9e98e201" podNamespace="calico-system" podName="calico-node-xvd9g" Oct 9 01:07:16.640445 kubelet[2623]: E1009 01:07:16.640367 2623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="17f18265-8b74-4e7a-91ae-c90ca2d431b8" 
containerName="flexvol-driver" Oct 9 01:07:16.640445 kubelet[2623]: I1009 01:07:16.640393 2623 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f18265-8b74-4e7a-91ae-c90ca2d431b8" containerName="flexvol-driver" Oct 9 01:07:16.652530 systemd[1]: Created slice kubepods-besteffort-pod8827e850_bbcb_4f14_8ab2_755d9e98e201.slice - libcontainer container kubepods-besteffort-pod8827e850_bbcb_4f14_8ab2_755d9e98e201.slice. Oct 9 01:07:16.699690 kubelet[2623]: I1009 01:07:16.699644 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8827e850-bbcb-4f14-8ab2-755d9e98e201-node-certs\") pod \"calico-node-xvd9g\" (UID: \"8827e850-bbcb-4f14-8ab2-755d9e98e201\") " pod="calico-system/calico-node-xvd9g" Oct 9 01:07:16.699835 kubelet[2623]: I1009 01:07:16.699704 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8827e850-bbcb-4f14-8ab2-755d9e98e201-cni-bin-dir\") pod \"calico-node-xvd9g\" (UID: \"8827e850-bbcb-4f14-8ab2-755d9e98e201\") " pod="calico-system/calico-node-xvd9g" Oct 9 01:07:16.699835 kubelet[2623]: I1009 01:07:16.699725 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8827e850-bbcb-4f14-8ab2-755d9e98e201-lib-modules\") pod \"calico-node-xvd9g\" (UID: \"8827e850-bbcb-4f14-8ab2-755d9e98e201\") " pod="calico-system/calico-node-xvd9g" Oct 9 01:07:16.699835 kubelet[2623]: I1009 01:07:16.699756 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8827e850-bbcb-4f14-8ab2-755d9e98e201-var-run-calico\") pod \"calico-node-xvd9g\" (UID: \"8827e850-bbcb-4f14-8ab2-755d9e98e201\") " pod="calico-system/calico-node-xvd9g" Oct 9 01:07:16.699835 kubelet[2623]: I1009 
01:07:16.699774 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8827e850-bbcb-4f14-8ab2-755d9e98e201-cni-net-dir\") pod \"calico-node-xvd9g\" (UID: \"8827e850-bbcb-4f14-8ab2-755d9e98e201\") " pod="calico-system/calico-node-xvd9g" Oct 9 01:07:16.699835 kubelet[2623]: I1009 01:07:16.699807 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8827e850-bbcb-4f14-8ab2-755d9e98e201-var-lib-calico\") pod \"calico-node-xvd9g\" (UID: \"8827e850-bbcb-4f14-8ab2-755d9e98e201\") " pod="calico-system/calico-node-xvd9g" Oct 9 01:07:16.699952 kubelet[2623]: I1009 01:07:16.699823 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8827e850-bbcb-4f14-8ab2-755d9e98e201-cni-log-dir\") pod \"calico-node-xvd9g\" (UID: \"8827e850-bbcb-4f14-8ab2-755d9e98e201\") " pod="calico-system/calico-node-xvd9g" Oct 9 01:07:16.699952 kubelet[2623]: I1009 01:07:16.699838 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8827e850-bbcb-4f14-8ab2-755d9e98e201-flexvol-driver-host\") pod \"calico-node-xvd9g\" (UID: \"8827e850-bbcb-4f14-8ab2-755d9e98e201\") " pod="calico-system/calico-node-xvd9g" Oct 9 01:07:16.699952 kubelet[2623]: I1009 01:07:16.699855 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8827e850-bbcb-4f14-8ab2-755d9e98e201-xtables-lock\") pod \"calico-node-xvd9g\" (UID: \"8827e850-bbcb-4f14-8ab2-755d9e98e201\") " pod="calico-system/calico-node-xvd9g" Oct 9 01:07:16.699952 kubelet[2623]: I1009 01:07:16.699869 2623 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8827e850-bbcb-4f14-8ab2-755d9e98e201-policysync\") pod \"calico-node-xvd9g\" (UID: \"8827e850-bbcb-4f14-8ab2-755d9e98e201\") " pod="calico-system/calico-node-xvd9g" Oct 9 01:07:16.699952 kubelet[2623]: I1009 01:07:16.699889 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8827e850-bbcb-4f14-8ab2-755d9e98e201-tigera-ca-bundle\") pod \"calico-node-xvd9g\" (UID: \"8827e850-bbcb-4f14-8ab2-755d9e98e201\") " pod="calico-system/calico-node-xvd9g" Oct 9 01:07:16.700069 kubelet[2623]: I1009 01:07:16.699914 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdht5\" (UniqueName: \"kubernetes.io/projected/8827e850-bbcb-4f14-8ab2-755d9e98e201-kube-api-access-zdht5\") pod \"calico-node-xvd9g\" (UID: \"8827e850-bbcb-4f14-8ab2-755d9e98e201\") " pod="calico-system/calico-node-xvd9g" Oct 9 01:07:16.961885 kubelet[2623]: E1009 01:07:16.961087 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:16.962021 containerd[1447]: time="2024-10-09T01:07:16.961693056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xvd9g,Uid:8827e850-bbcb-4f14-8ab2-755d9e98e201,Namespace:calico-system,Attempt:0,}" Oct 9 01:07:16.981780 containerd[1447]: time="2024-10-09T01:07:16.981499552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:16.981780 containerd[1447]: time="2024-10-09T01:07:16.981572635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:16.981780 containerd[1447]: time="2024-10-09T01:07:16.981589756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:16.982234 containerd[1447]: time="2024-10-09T01:07:16.982151220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:17.001939 systemd[1]: Started cri-containerd-3413e01305e1316fad99c14e2934e2203a7c11c9280b164d238582e9ba4065ef.scope - libcontainer container 3413e01305e1316fad99c14e2934e2203a7c11c9280b164d238582e9ba4065ef. Oct 9 01:07:17.020731 containerd[1447]: time="2024-10-09T01:07:17.020591812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xvd9g,Uid:8827e850-bbcb-4f14-8ab2-755d9e98e201,Namespace:calico-system,Attempt:0,} returns sandbox id \"3413e01305e1316fad99c14e2934e2203a7c11c9280b164d238582e9ba4065ef\"" Oct 9 01:07:17.021499 kubelet[2623]: E1009 01:07:17.021474 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:17.024111 containerd[1447]: time="2024-10-09T01:07:17.024046996Z" level=info msg="CreateContainer within sandbox \"3413e01305e1316fad99c14e2934e2203a7c11c9280b164d238582e9ba4065ef\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 01:07:17.106950 containerd[1447]: time="2024-10-09T01:07:17.106891405Z" level=info msg="CreateContainer within sandbox \"3413e01305e1316fad99c14e2934e2203a7c11c9280b164d238582e9ba4065ef\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e9bb2cd7a238edaee100132ce37747e29d55395e7c9917a7419053fdd376417a\"" Oct 9 01:07:17.107549 containerd[1447]: time="2024-10-09T01:07:17.107386466Z" level=info msg="StartContainer for 
\"e9bb2cd7a238edaee100132ce37747e29d55395e7c9917a7419053fdd376417a\"" Oct 9 01:07:17.133916 systemd[1]: Started cri-containerd-e9bb2cd7a238edaee100132ce37747e29d55395e7c9917a7419053fdd376417a.scope - libcontainer container e9bb2cd7a238edaee100132ce37747e29d55395e7c9917a7419053fdd376417a. Oct 9 01:07:17.159164 containerd[1447]: time="2024-10-09T01:07:17.159058458Z" level=info msg="StartContainer for \"e9bb2cd7a238edaee100132ce37747e29d55395e7c9917a7419053fdd376417a\" returns successfully" Oct 9 01:07:17.170262 systemd[1]: cri-containerd-e9bb2cd7a238edaee100132ce37747e29d55395e7c9917a7419053fdd376417a.scope: Deactivated successfully. Oct 9 01:07:17.197936 containerd[1447]: time="2024-10-09T01:07:17.197727188Z" level=info msg="shim disconnected" id=e9bb2cd7a238edaee100132ce37747e29d55395e7c9917a7419053fdd376417a namespace=k8s.io Oct 9 01:07:17.197936 containerd[1447]: time="2024-10-09T01:07:17.197791550Z" level=warning msg="cleaning up after shim disconnected" id=e9bb2cd7a238edaee100132ce37747e29d55395e7c9917a7419053fdd376417a namespace=k8s.io Oct 9 01:07:17.197936 containerd[1447]: time="2024-10-09T01:07:17.197799951Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:07:17.506209 kubelet[2623]: E1009 01:07:17.506158 2623 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6frj" podUID="72a149ef-a469-42aa-b8b7-4b018e2ec3a1" Oct 9 01:07:17.508602 kubelet[2623]: I1009 01:07:17.508576 2623 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f18265-8b74-4e7a-91ae-c90ca2d431b8" path="/var/lib/kubelet/pods/17f18265-8b74-4e7a-91ae-c90ca2d431b8/volumes" Oct 9 01:07:17.509019 kubelet[2623]: I1009 01:07:17.508992 2623 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c853a3e-dcf4-4311-b7d0-caa109780dcc" 
path="/var/lib/kubelet/pods/8c853a3e-dcf4-4311-b7d0-caa109780dcc/volumes" Oct 9 01:07:17.601383 kubelet[2623]: E1009 01:07:17.600494 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:17.601988 containerd[1447]: time="2024-10-09T01:07:17.601950539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 01:07:18.899553 systemd[1]: Started sshd@7-10.0.0.142:22-10.0.0.1:43888.service - OpenSSH per-connection server daemon (10.0.0.1:43888). Oct 9 01:07:18.941369 sshd[3637]: Accepted publickey for core from 10.0.0.1 port 43888 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:07:18.942704 sshd[3637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:18.946553 systemd-logind[1432]: New session 8 of user core. Oct 9 01:07:18.954879 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 01:07:19.070265 sshd[3637]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:19.073502 systemd[1]: sshd@7-10.0.0.142:22-10.0.0.1:43888.service: Deactivated successfully. Oct 9 01:07:19.075673 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 01:07:19.076573 systemd-logind[1432]: Session 8 logged out. Waiting for processes to exit. Oct 9 01:07:19.077561 systemd-logind[1432]: Removed session 8. 
Oct 9 01:07:19.506711 kubelet[2623]: E1009 01:07:19.506646 2623 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6frj" podUID="72a149ef-a469-42aa-b8b7-4b018e2ec3a1" Oct 9 01:07:21.506624 kubelet[2623]: E1009 01:07:21.506564 2623 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6frj" podUID="72a149ef-a469-42aa-b8b7-4b018e2ec3a1" Oct 9 01:07:22.181292 containerd[1447]: time="2024-10-09T01:07:22.181233246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:22.182037 containerd[1447]: time="2024-10-09T01:07:22.181986472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Oct 9 01:07:22.182559 containerd[1447]: time="2024-10-09T01:07:22.182518571Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:22.184859 containerd[1447]: time="2024-10-09T01:07:22.184819011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:22.185641 containerd[1447]: time="2024-10-09T01:07:22.185598559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 4.583605898s" Oct 9 01:07:22.185641 containerd[1447]: time="2024-10-09T01:07:22.185631520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Oct 9 01:07:22.188842 containerd[1447]: time="2024-10-09T01:07:22.188804031Z" level=info msg="CreateContainer within sandbox \"3413e01305e1316fad99c14e2934e2203a7c11c9280b164d238582e9ba4065ef\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 01:07:22.313259 containerd[1447]: time="2024-10-09T01:07:22.313203360Z" level=info msg="CreateContainer within sandbox \"3413e01305e1316fad99c14e2934e2203a7c11c9280b164d238582e9ba4065ef\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8ad5cf75dd85714cb95f7245b9c4cd85cba9f46b3f9fa7007186d214d1a9e293\"" Oct 9 01:07:22.313993 containerd[1447]: time="2024-10-09T01:07:22.313964507Z" level=info msg="StartContainer for \"8ad5cf75dd85714cb95f7245b9c4cd85cba9f46b3f9fa7007186d214d1a9e293\"" Oct 9 01:07:22.346917 systemd[1]: Started cri-containerd-8ad5cf75dd85714cb95f7245b9c4cd85cba9f46b3f9fa7007186d214d1a9e293.scope - libcontainer container 8ad5cf75dd85714cb95f7245b9c4cd85cba9f46b3f9fa7007186d214d1a9e293. Oct 9 01:07:22.379140 containerd[1447]: time="2024-10-09T01:07:22.379090594Z" level=info msg="StartContainer for \"8ad5cf75dd85714cb95f7245b9c4cd85cba9f46b3f9fa7007186d214d1a9e293\" returns successfully" Oct 9 01:07:22.612261 kubelet[2623]: E1009 01:07:22.612015 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:22.957821 systemd[1]: cri-containerd-8ad5cf75dd85714cb95f7245b9c4cd85cba9f46b3f9fa7007186d214d1a9e293.scope: Deactivated successfully. 
Oct 9 01:07:22.974527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ad5cf75dd85714cb95f7245b9c4cd85cba9f46b3f9fa7007186d214d1a9e293-rootfs.mount: Deactivated successfully. Oct 9 01:07:22.984061 containerd[1447]: time="2024-10-09T01:07:22.983957157Z" level=info msg="shim disconnected" id=8ad5cf75dd85714cb95f7245b9c4cd85cba9f46b3f9fa7007186d214d1a9e293 namespace=k8s.io Oct 9 01:07:22.984061 containerd[1447]: time="2024-10-09T01:07:22.984054801Z" level=warning msg="cleaning up after shim disconnected" id=8ad5cf75dd85714cb95f7245b9c4cd85cba9f46b3f9fa7007186d214d1a9e293 namespace=k8s.io Oct 9 01:07:22.984061 containerd[1447]: time="2024-10-09T01:07:22.984066041Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:07:23.005323 kubelet[2623]: I1009 01:07:23.005112 2623 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 01:07:23.025816 kubelet[2623]: I1009 01:07:23.025769 2623 topology_manager.go:215] "Topology Admit Handler" podUID="62fb3473-3704-419a-9a60-2f9f5f1e2c3b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-h2g6g" Oct 9 01:07:23.027628 kubelet[2623]: I1009 01:07:23.027574 2623 topology_manager.go:215] "Topology Admit Handler" podUID="365f47e1-51b7-479f-b333-b53fb198b0fd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hztch" Oct 9 01:07:23.028048 kubelet[2623]: I1009 01:07:23.027719 2623 topology_manager.go:215] "Topology Admit Handler" podUID="6e9dceb0-a563-4980-b699-4a2e937ee8e7" podNamespace="calico-system" podName="calico-kube-controllers-8c6f8c56b-smw77" Oct 9 01:07:23.036043 systemd[1]: Created slice kubepods-burstable-pod62fb3473_3704_419a_9a60_2f9f5f1e2c3b.slice - libcontainer container kubepods-burstable-pod62fb3473_3704_419a_9a60_2f9f5f1e2c3b.slice. 
Oct 9 01:07:23.042759 kubelet[2623]: I1009 01:07:23.040791 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58fjx\" (UniqueName: \"kubernetes.io/projected/6e9dceb0-a563-4980-b699-4a2e937ee8e7-kube-api-access-58fjx\") pod \"calico-kube-controllers-8c6f8c56b-smw77\" (UID: \"6e9dceb0-a563-4980-b699-4a2e937ee8e7\") " pod="calico-system/calico-kube-controllers-8c6f8c56b-smw77" Oct 9 01:07:23.042759 kubelet[2623]: I1009 01:07:23.040831 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62fb3473-3704-419a-9a60-2f9f5f1e2c3b-config-volume\") pod \"coredns-7db6d8ff4d-h2g6g\" (UID: \"62fb3473-3704-419a-9a60-2f9f5f1e2c3b\") " pod="kube-system/coredns-7db6d8ff4d-h2g6g" Oct 9 01:07:23.042759 kubelet[2623]: I1009 01:07:23.040849 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365f47e1-51b7-479f-b333-b53fb198b0fd-config-volume\") pod \"coredns-7db6d8ff4d-hztch\" (UID: \"365f47e1-51b7-479f-b333-b53fb198b0fd\") " pod="kube-system/coredns-7db6d8ff4d-hztch" Oct 9 01:07:23.042759 kubelet[2623]: I1009 01:07:23.040869 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnsxg\" (UniqueName: \"kubernetes.io/projected/62fb3473-3704-419a-9a60-2f9f5f1e2c3b-kube-api-access-tnsxg\") pod \"coredns-7db6d8ff4d-h2g6g\" (UID: \"62fb3473-3704-419a-9a60-2f9f5f1e2c3b\") " pod="kube-system/coredns-7db6d8ff4d-h2g6g" Oct 9 01:07:23.042759 kubelet[2623]: I1009 01:07:23.040890 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nflbk\" (UniqueName: \"kubernetes.io/projected/365f47e1-51b7-479f-b333-b53fb198b0fd-kube-api-access-nflbk\") pod \"coredns-7db6d8ff4d-hztch\" (UID: 
\"365f47e1-51b7-479f-b333-b53fb198b0fd\") " pod="kube-system/coredns-7db6d8ff4d-hztch" Oct 9 01:07:23.042924 kubelet[2623]: I1009 01:07:23.040908 2623 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e9dceb0-a563-4980-b699-4a2e937ee8e7-tigera-ca-bundle\") pod \"calico-kube-controllers-8c6f8c56b-smw77\" (UID: \"6e9dceb0-a563-4980-b699-4a2e937ee8e7\") " pod="calico-system/calico-kube-controllers-8c6f8c56b-smw77" Oct 9 01:07:23.043776 systemd[1]: Created slice kubepods-burstable-pod365f47e1_51b7_479f_b333_b53fb198b0fd.slice - libcontainer container kubepods-burstable-pod365f47e1_51b7_479f_b333_b53fb198b0fd.slice. Oct 9 01:07:23.048591 systemd[1]: Created slice kubepods-besteffort-pod6e9dceb0_a563_4980_b699_4a2e937ee8e7.slice - libcontainer container kubepods-besteffort-pod6e9dceb0_a563_4980_b699_4a2e937ee8e7.slice. Oct 9 01:07:23.341257 kubelet[2623]: E1009 01:07:23.341155 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:23.341736 containerd[1447]: time="2024-10-09T01:07:23.341697316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2g6g,Uid:62fb3473-3704-419a-9a60-2f9f5f1e2c3b,Namespace:kube-system,Attempt:0,}" Oct 9 01:07:23.347019 kubelet[2623]: E1009 01:07:23.346992 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:23.348536 containerd[1447]: time="2024-10-09T01:07:23.348502868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hztch,Uid:365f47e1-51b7-479f-b333-b53fb198b0fd,Namespace:kube-system,Attempt:0,}" Oct 9 01:07:23.351287 containerd[1447]: time="2024-10-09T01:07:23.351255642Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-8c6f8c56b-smw77,Uid:6e9dceb0-a563-4980-b699-4a2e937ee8e7,Namespace:calico-system,Attempt:0,}" Oct 9 01:07:23.525382 systemd[1]: Created slice kubepods-besteffort-pod72a149ef_a469_42aa_b8b7_4b018e2ec3a1.slice - libcontainer container kubepods-besteffort-pod72a149ef_a469_42aa_b8b7_4b018e2ec3a1.slice. Oct 9 01:07:23.535607 containerd[1447]: time="2024-10-09T01:07:23.535269948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t6frj,Uid:72a149ef-a469-42aa-b8b7-4b018e2ec3a1,Namespace:calico-system,Attempt:0,}" Oct 9 01:07:23.619590 kubelet[2623]: E1009 01:07:23.615638 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:23.623571 containerd[1447]: time="2024-10-09T01:07:23.623529313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 01:07:23.671412 containerd[1447]: time="2024-10-09T01:07:23.670926727Z" level=error msg="Failed to destroy network for sandbox \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:23.671412 containerd[1447]: time="2024-10-09T01:07:23.671327620Z" level=error msg="encountered an error cleaning up failed sandbox \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:23.671412 containerd[1447]: time="2024-10-09T01:07:23.671382662Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2g6g,Uid:62fb3473-3704-419a-9a60-2f9f5f1e2c3b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:23.671605 containerd[1447]: time="2024-10-09T01:07:23.671519067Z" level=error msg="Failed to destroy network for sandbox \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:23.671875 containerd[1447]: time="2024-10-09T01:07:23.671848958Z" level=error msg="encountered an error cleaning up failed sandbox \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:23.671929 containerd[1447]: time="2024-10-09T01:07:23.671888559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hztch,Uid:365f47e1-51b7-479f-b333-b53fb198b0fd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:23.672064 kubelet[2623]: E1009 01:07:23.672033 2623 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:23.672113 kubelet[2623]: E1009 01:07:23.672093 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hztch" Oct 9 01:07:23.672140 kubelet[2623]: E1009 01:07:23.672113 2623 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hztch" Oct 9 01:07:23.672169 kubelet[2623]: E1009 01:07:23.672150 2623 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hztch_kube-system(365f47e1-51b7-479f-b333-b53fb198b0fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hztch_kube-system(365f47e1-51b7-479f-b333-b53fb198b0fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hztch" 
podUID="365f47e1-51b7-479f-b333-b53fb198b0fd" Oct 9 01:07:23.676593 kubelet[2623]: E1009 01:07:23.676542 2623 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:23.676678 kubelet[2623]: E1009 01:07:23.676599 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h2g6g" Oct 9 01:07:23.676678 kubelet[2623]: E1009 01:07:23.676628 2623 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h2g6g" Oct 9 01:07:23.676678 kubelet[2623]: E1009 01:07:23.676667 2623 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-h2g6g_kube-system(62fb3473-3704-419a-9a60-2f9f5f1e2c3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-h2g6g_kube-system(62fb3473-3704-419a-9a60-2f9f5f1e2c3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-h2g6g" podUID="62fb3473-3704-419a-9a60-2f9f5f1e2c3b" Oct 9 01:07:23.685323 containerd[1447]: time="2024-10-09T01:07:23.684917243Z" level=error msg="Failed to destroy network for sandbox \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:23.685323 containerd[1447]: time="2024-10-09T01:07:23.685204253Z" level=error msg="encountered an error cleaning up failed sandbox \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:23.685323 containerd[1447]: time="2024-10-09T01:07:23.685241974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t6frj,Uid:72a149ef-a469-42aa-b8b7-4b018e2ec3a1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:23.685619 kubelet[2623]: E1009 01:07:23.685561 2623 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:23.685619 kubelet[2623]: E1009 01:07:23.685614 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t6frj" Oct 9 01:07:23.685697 kubelet[2623]: E1009 01:07:23.685631 2623 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t6frj" Oct 9 01:07:23.685697 kubelet[2623]: E1009 01:07:23.685663 2623 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t6frj_calico-system(72a149ef-a469-42aa-b8b7-4b018e2ec3a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t6frj_calico-system(72a149ef-a469-42aa-b8b7-4b018e2ec3a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t6frj" podUID="72a149ef-a469-42aa-b8b7-4b018e2ec3a1" Oct 9 01:07:23.689789 containerd[1447]: 
time="2024-10-09T01:07:23.689693446Z" level=error msg="Failed to destroy network for sandbox \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:23.690052 containerd[1447]: time="2024-10-09T01:07:23.690011497Z" level=error msg="encountered an error cleaning up failed sandbox \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:23.690089 containerd[1447]: time="2024-10-09T01:07:23.690059858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c6f8c56b-smw77,Uid:6e9dceb0-a563-4980-b699-4a2e937ee8e7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:23.690257 kubelet[2623]: E1009 01:07:23.690220 2623 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:23.690299 kubelet[2623]: E1009 01:07:23.690268 2623 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8c6f8c56b-smw77" Oct 9 01:07:23.690299 kubelet[2623]: E1009 01:07:23.690285 2623 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8c6f8c56b-smw77" Oct 9 01:07:23.690359 kubelet[2623]: E1009 01:07:23.690319 2623 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8c6f8c56b-smw77_calico-system(6e9dceb0-a563-4980-b699-4a2e937ee8e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8c6f8c56b-smw77_calico-system(6e9dceb0-a563-4980-b699-4a2e937ee8e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8c6f8c56b-smw77" podUID="6e9dceb0-a563-4980-b699-4a2e937ee8e7" Oct 9 01:07:24.083385 systemd[1]: Started sshd@8-10.0.0.142:22-10.0.0.1:48722.service - OpenSSH per-connection server daemon (10.0.0.1:48722). 
Oct 9 01:07:24.123442 sshd[3879]: Accepted publickey for core from 10.0.0.1 port 48722 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:07:24.124654 sshd[3879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:24.128227 systemd-logind[1432]: New session 9 of user core. Oct 9 01:07:24.137891 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 01:07:24.247358 sshd[3879]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:24.250310 systemd[1]: sshd@8-10.0.0.142:22-10.0.0.1:48722.service: Deactivated successfully. Oct 9 01:07:24.251971 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 01:07:24.252546 systemd-logind[1432]: Session 9 logged out. Waiting for processes to exit. Oct 9 01:07:24.254187 systemd-logind[1432]: Removed session 9. Oct 9 01:07:24.308011 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792-shm.mount: Deactivated successfully. Oct 9 01:07:24.308106 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12-shm.mount: Deactivated successfully. Oct 9 01:07:24.308157 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a-shm.mount: Deactivated successfully. 
Oct 9 01:07:24.618213 kubelet[2623]: I1009 01:07:24.618184 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Oct 9 01:07:24.618738 containerd[1447]: time="2024-10-09T01:07:24.618703099Z" level=info msg="StopPodSandbox for \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\"" Oct 9 01:07:24.619046 containerd[1447]: time="2024-10-09T01:07:24.618880825Z" level=info msg="Ensure that sandbox 48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f in task-service has been cleanup successfully" Oct 9 01:07:24.619919 kubelet[2623]: I1009 01:07:24.619880 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" Oct 9 01:07:24.620689 containerd[1447]: time="2024-10-09T01:07:24.620347113Z" level=info msg="StopPodSandbox for \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\"" Oct 9 01:07:24.620689 containerd[1447]: time="2024-10-09T01:07:24.620491918Z" level=info msg="Ensure that sandbox a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792 in task-service has been cleanup successfully" Oct 9 01:07:24.621406 kubelet[2623]: I1009 01:07:24.621364 2623 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Oct 9 01:07:24.622566 containerd[1447]: time="2024-10-09T01:07:24.622531825Z" level=info msg="StopPodSandbox for \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\"" Oct 9 01:07:24.622694 containerd[1447]: time="2024-10-09T01:07:24.622675990Z" level=info msg="Ensure that sandbox c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a in task-service has been cleanup successfully" Oct 9 01:07:24.624393 kubelet[2623]: I1009 01:07:24.623538 2623 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Oct 9 01:07:24.624468 containerd[1447]: time="2024-10-09T01:07:24.623954432Z" level=info msg="StopPodSandbox for \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\"" Oct 9 01:07:24.624468 containerd[1447]: time="2024-10-09T01:07:24.624109398Z" level=info msg="Ensure that sandbox ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12 in task-service has been cleanup successfully" Oct 9 01:07:24.651877 containerd[1447]: time="2024-10-09T01:07:24.651814953Z" level=error msg="StopPodSandbox for \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\" failed" error="failed to destroy network for sandbox \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:24.652027 containerd[1447]: time="2024-10-09T01:07:24.651833634Z" level=error msg="StopPodSandbox for \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\" failed" error="failed to destroy network for sandbox \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:24.655015 kubelet[2623]: E1009 01:07:24.654976 2623 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Oct 9 01:07:24.655088 kubelet[2623]: E1009 01:07:24.655049 2623 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f"} Oct 9 01:07:24.655118 kubelet[2623]: E1009 01:07:24.655085 2623 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72a149ef-a469-42aa-b8b7-4b018e2ec3a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:07:24.655174 kubelet[2623]: E1009 01:07:24.655126 2623 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72a149ef-a469-42aa-b8b7-4b018e2ec3a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t6frj" podUID="72a149ef-a469-42aa-b8b7-4b018e2ec3a1" Oct 9 01:07:24.655335 kubelet[2623]: E1009 01:07:24.655299 2623 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" Oct 9 01:07:24.655372 kubelet[2623]: E1009 01:07:24.655348 2623 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792"} Oct 9 01:07:24.655402 kubelet[2623]: E1009 01:07:24.655376 2623 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e9dceb0-a563-4980-b699-4a2e937ee8e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:07:24.655437 kubelet[2623]: E1009 01:07:24.655395 2623 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e9dceb0-a563-4980-b699-4a2e937ee8e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8c6f8c56b-smw77" podUID="6e9dceb0-a563-4980-b699-4a2e937ee8e7" Oct 9 01:07:24.658276 containerd[1447]: time="2024-10-09T01:07:24.657845032Z" level=error msg="StopPodSandbox for \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\" failed" error="failed to destroy network for sandbox \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 9 01:07:24.658660 kubelet[2623]: E1009 01:07:24.658622 2623 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Oct 9 01:07:24.658707 kubelet[2623]: E1009 01:07:24.658668 2623 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12"} Oct 9 01:07:24.658707 kubelet[2623]: E1009 01:07:24.658693 2623 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"365f47e1-51b7-479f-b333-b53fb198b0fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:07:24.658788 kubelet[2623]: E1009 01:07:24.658715 2623 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"365f47e1-51b7-479f-b333-b53fb198b0fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hztch" podUID="365f47e1-51b7-479f-b333-b53fb198b0fd" Oct 9 
01:07:24.664713 containerd[1447]: time="2024-10-09T01:07:24.664669338Z" level=error msg="StopPodSandbox for \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\" failed" error="failed to destroy network for sandbox \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:24.664895 kubelet[2623]: E1009 01:07:24.664845 2623 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Oct 9 01:07:24.664895 kubelet[2623]: E1009 01:07:24.664885 2623 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a"} Oct 9 01:07:24.664992 kubelet[2623]: E1009 01:07:24.664912 2623 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"62fb3473-3704-419a-9a60-2f9f5f1e2c3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:07:24.664992 kubelet[2623]: E1009 01:07:24.664931 2623 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"62fb3473-3704-419a-9a60-2f9f5f1e2c3b\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-h2g6g" podUID="62fb3473-3704-419a-9a60-2f9f5f1e2c3b" Oct 9 01:07:27.086693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1508929112.mount: Deactivated successfully. Oct 9 01:07:27.204428 containerd[1447]: time="2024-10-09T01:07:27.204365967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:27.204920 containerd[1447]: time="2024-10-09T01:07:27.204875582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 9 01:07:27.205646 containerd[1447]: time="2024-10-09T01:07:27.205605445Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:27.211696 containerd[1447]: time="2024-10-09T01:07:27.211658549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:27.212464 containerd[1447]: time="2024-10-09T01:07:27.212427972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 3.588851658s" Oct 9 01:07:27.212464 containerd[1447]: 
time="2024-10-09T01:07:27.212460813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 9 01:07:27.229153 containerd[1447]: time="2024-10-09T01:07:27.229062638Z" level=info msg="CreateContainer within sandbox \"3413e01305e1316fad99c14e2934e2203a7c11c9280b164d238582e9ba4065ef\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 01:07:27.243109 containerd[1447]: time="2024-10-09T01:07:27.243048383Z" level=info msg="CreateContainer within sandbox \"3413e01305e1316fad99c14e2934e2203a7c11c9280b164d238582e9ba4065ef\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3dd22b0ae75270ea98cb85216ee015ede448b5ef8d9ac14ff064a784d1be6585\"" Oct 9 01:07:27.244515 containerd[1447]: time="2024-10-09T01:07:27.244468386Z" level=info msg="StartContainer for \"3dd22b0ae75270ea98cb85216ee015ede448b5ef8d9ac14ff064a784d1be6585\"" Oct 9 01:07:27.305992 systemd[1]: Started cri-containerd-3dd22b0ae75270ea98cb85216ee015ede448b5ef8d9ac14ff064a784d1be6585.scope - libcontainer container 3dd22b0ae75270ea98cb85216ee015ede448b5ef8d9ac14ff064a784d1be6585. Oct 9 01:07:27.332531 containerd[1447]: time="2024-10-09T01:07:27.332285215Z" level=info msg="StartContainer for \"3dd22b0ae75270ea98cb85216ee015ede448b5ef8d9ac14ff064a784d1be6585\" returns successfully" Oct 9 01:07:27.487978 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 01:07:27.488093 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 9 01:07:27.631409 kubelet[2623]: E1009 01:07:27.631328 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:27.656356 kubelet[2623]: I1009 01:07:27.656276 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xvd9g" podStartSLOduration=2.04486 podStartE2EDuration="11.656258904s" podCreationTimestamp="2024-10-09 01:07:16 +0000 UTC" firstStartedPulling="2024-10-09 01:07:17.601691008 +0000 UTC m=+30.183208813" lastFinishedPulling="2024-10-09 01:07:27.213089912 +0000 UTC m=+39.794607717" observedRunningTime="2024-10-09 01:07:27.655676846 +0000 UTC m=+40.237194611" watchObservedRunningTime="2024-10-09 01:07:27.656258904 +0000 UTC m=+40.237776709" Oct 9 01:07:29.257405 systemd[1]: Started sshd@9-10.0.0.142:22-10.0.0.1:48724.service - OpenSSH per-connection server daemon (10.0.0.1:48724). Oct 9 01:07:29.298471 kubelet[2623]: I1009 01:07:29.297963 2623 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:07:29.298826 kubelet[2623]: E1009 01:07:29.298679 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:29.323211 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 48724 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:07:29.325016 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:29.329671 systemd-logind[1432]: New session 10 of user core. Oct 9 01:07:29.336589 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 01:07:29.429074 systemd[1]: run-containerd-runc-k8s.io-3dd22b0ae75270ea98cb85216ee015ede448b5ef8d9ac14ff064a784d1be6585-runc.QyYFA8.mount: Deactivated successfully. 
Oct 9 01:07:29.479012 sshd[4159]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:29.490424 systemd[1]: sshd@9-10.0.0.142:22-10.0.0.1:48724.service: Deactivated successfully. Oct 9 01:07:29.492079 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 01:07:29.493528 systemd-logind[1432]: Session 10 logged out. Waiting for processes to exit. Oct 9 01:07:29.498159 systemd[1]: Started sshd@10-10.0.0.142:22-10.0.0.1:48738.service - OpenSSH per-connection server daemon (10.0.0.1:48738). Oct 9 01:07:29.499921 systemd-logind[1432]: Removed session 10. Oct 9 01:07:29.534437 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 48738 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:07:29.535701 sshd[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:29.539810 systemd-logind[1432]: New session 11 of user core. Oct 9 01:07:29.546886 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 01:07:29.697203 sshd[4223]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:29.703799 systemd[1]: sshd@10-10.0.0.142:22-10.0.0.1:48738.service: Deactivated successfully. Oct 9 01:07:29.705609 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 01:07:29.706959 systemd-logind[1432]: Session 11 logged out. Waiting for processes to exit. Oct 9 01:07:29.708680 systemd[1]: Started sshd@11-10.0.0.142:22-10.0.0.1:48742.service - OpenSSH per-connection server daemon (10.0.0.1:48742). Oct 9 01:07:29.714467 systemd-logind[1432]: Removed session 11. Oct 9 01:07:29.757115 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 48742 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:07:29.758535 sshd[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:29.765129 systemd-logind[1432]: New session 12 of user core. Oct 9 01:07:29.775906 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 9 01:07:29.896694 sshd[4235]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:29.900371 systemd[1]: sshd@11-10.0.0.142:22-10.0.0.1:48742.service: Deactivated successfully. Oct 9 01:07:29.902074 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 01:07:29.902625 systemd-logind[1432]: Session 12 logged out. Waiting for processes to exit. Oct 9 01:07:29.903961 systemd-logind[1432]: Removed session 12. Oct 9 01:07:30.632166 kubelet[2623]: I1009 01:07:30.631977 2623 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:07:30.632921 kubelet[2623]: E1009 01:07:30.632901 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:30.640304 kubelet[2623]: E1009 01:07:30.639896 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:31.138771 kernel: bpftool[4337]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 01:07:31.316874 systemd-networkd[1389]: vxlan.calico: Link UP Oct 9 01:07:31.316882 systemd-networkd[1389]: vxlan.calico: Gained carrier Oct 9 01:07:32.509897 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL Oct 9 01:07:34.907846 systemd[1]: Started sshd@12-10.0.0.142:22-10.0.0.1:36314.service - OpenSSH per-connection server daemon (10.0.0.1:36314). Oct 9 01:07:34.962318 sshd[4416]: Accepted publickey for core from 10.0.0.1 port 36314 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:07:34.963001 sshd[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:34.972767 systemd-logind[1432]: New session 13 of user core. Oct 9 01:07:34.998142 systemd[1]: Started session-13.scope - Session 13 of User core. 
Oct 9 01:07:35.159527 sshd[4416]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:35.169313 systemd[1]: sshd@12-10.0.0.142:22-10.0.0.1:36314.service: Deactivated successfully. Oct 9 01:07:35.171265 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 01:07:35.172880 systemd-logind[1432]: Session 13 logged out. Waiting for processes to exit. Oct 9 01:07:35.176471 systemd[1]: Started sshd@13-10.0.0.142:22-10.0.0.1:36330.service - OpenSSH per-connection server daemon (10.0.0.1:36330). Oct 9 01:07:35.179807 systemd-logind[1432]: Removed session 13. Oct 9 01:07:35.218861 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 36330 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:07:35.220515 sshd[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:35.224334 systemd-logind[1432]: New session 14 of user core. Oct 9 01:07:35.237903 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 01:07:35.515458 sshd[4431]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:35.523494 systemd[1]: sshd@13-10.0.0.142:22-10.0.0.1:36330.service: Deactivated successfully. Oct 9 01:07:35.525709 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 01:07:35.527249 systemd-logind[1432]: Session 14 logged out. Waiting for processes to exit. Oct 9 01:07:35.529204 systemd[1]: Started sshd@14-10.0.0.142:22-10.0.0.1:36336.service - OpenSSH per-connection server daemon (10.0.0.1:36336). Oct 9 01:07:35.530482 systemd-logind[1432]: Removed session 14. Oct 9 01:07:35.572430 sshd[4445]: Accepted publickey for core from 10.0.0.1 port 36336 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:07:35.574169 sshd[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:35.578977 systemd-logind[1432]: New session 15 of user core. Oct 9 01:07:35.591886 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 9 01:07:36.508189 containerd[1447]: time="2024-10-09T01:07:36.508142279Z" level=info msg="StopPodSandbox for \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\"" Oct 9 01:07:36.794626 containerd[1447]: 2024-10-09 01:07:36.636 [INFO][4476] k8s.go 608: Cleaning up netns ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Oct 9 01:07:36.794626 containerd[1447]: 2024-10-09 01:07:36.636 [INFO][4476] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" iface="eth0" netns="/var/run/netns/cni-26bcef1d-e9ac-a4ca-d6f5-8c85aa6ec779" Oct 9 01:07:36.794626 containerd[1447]: 2024-10-09 01:07:36.637 [INFO][4476] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" iface="eth0" netns="/var/run/netns/cni-26bcef1d-e9ac-a4ca-d6f5-8c85aa6ec779" Oct 9 01:07:36.794626 containerd[1447]: 2024-10-09 01:07:36.638 [INFO][4476] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" iface="eth0" netns="/var/run/netns/cni-26bcef1d-e9ac-a4ca-d6f5-8c85aa6ec779" Oct 9 01:07:36.794626 containerd[1447]: 2024-10-09 01:07:36.638 [INFO][4476] k8s.go 615: Releasing IP address(es) ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Oct 9 01:07:36.794626 containerd[1447]: 2024-10-09 01:07:36.638 [INFO][4476] utils.go 188: Calico CNI releasing IP address ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Oct 9 01:07:36.794626 containerd[1447]: 2024-10-09 01:07:36.777 [INFO][4492] ipam_plugin.go 417: Releasing address using handleID ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" HandleID="k8s-pod-network.48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Workload="localhost-k8s-csi--node--driver--t6frj-eth0" Oct 9 01:07:36.794626 containerd[1447]: 2024-10-09 01:07:36.778 [INFO][4492] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:36.794626 containerd[1447]: 2024-10-09 01:07:36.778 [INFO][4492] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:36.794626 containerd[1447]: 2024-10-09 01:07:36.788 [WARNING][4492] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" HandleID="k8s-pod-network.48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Workload="localhost-k8s-csi--node--driver--t6frj-eth0" Oct 9 01:07:36.794626 containerd[1447]: 2024-10-09 01:07:36.788 [INFO][4492] ipam_plugin.go 445: Releasing address using workloadID ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" HandleID="k8s-pod-network.48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Workload="localhost-k8s-csi--node--driver--t6frj-eth0" Oct 9 01:07:36.794626 containerd[1447]: 2024-10-09 01:07:36.791 [INFO][4492] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:36.794626 containerd[1447]: 2024-10-09 01:07:36.793 [INFO][4476] k8s.go 621: Teardown processing complete. ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Oct 9 01:07:36.795019 containerd[1447]: time="2024-10-09T01:07:36.794839028Z" level=info msg="TearDown network for sandbox \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\" successfully" Oct 9 01:07:36.795019 containerd[1447]: time="2024-10-09T01:07:36.794868109Z" level=info msg="StopPodSandbox for \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\" returns successfully" Oct 9 01:07:36.797490 systemd[1]: run-netns-cni\x2d26bcef1d\x2de9ac\x2da4ca\x2dd6f5\x2d8c85aa6ec779.mount: Deactivated successfully. 
Oct 9 01:07:36.800727 containerd[1447]: time="2024-10-09T01:07:36.800692094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t6frj,Uid:72a149ef-a469-42aa-b8b7-4b018e2ec3a1,Namespace:calico-system,Attempt:1,}" Oct 9 01:07:36.928537 systemd-networkd[1389]: cali09e912a581a: Link UP Oct 9 01:07:36.928807 systemd-networkd[1389]: cali09e912a581a: Gained carrier Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.860 [INFO][4502] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--t6frj-eth0 csi-node-driver- calico-system 72a149ef-a469-42aa-b8b7-4b018e2ec3a1 928 0 2024-10-09 01:07:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-t6frj eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali09e912a581a [] []}} ContainerID="bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" Namespace="calico-system" Pod="csi-node-driver-t6frj" WorkloadEndpoint="localhost-k8s-csi--node--driver--t6frj-" Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.860 [INFO][4502] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" Namespace="calico-system" Pod="csi-node-driver-t6frj" WorkloadEndpoint="localhost-k8s-csi--node--driver--t6frj-eth0" Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.886 [INFO][4514] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" HandleID="k8s-pod-network.bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" Workload="localhost-k8s-csi--node--driver--t6frj-eth0" Oct 
9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.897 [INFO][4514] ipam_plugin.go 270: Auto assigning IP ContainerID="bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" HandleID="k8s-pod-network.bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" Workload="localhost-k8s-csi--node--driver--t6frj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001fbe20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-t6frj", "timestamp":"2024-10-09 01:07:36.88673112 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.897 [INFO][4514] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.897 [INFO][4514] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.897 [INFO][4514] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.899 [INFO][4514] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" host="localhost" Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.904 [INFO][4514] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.908 [INFO][4514] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.910 [INFO][4514] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.912 [INFO][4514] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.912 [INFO][4514] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" host="localhost" Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.913 [INFO][4514] ipam.go 1685: Creating new handle: k8s-pod-network.bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543 Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.917 [INFO][4514] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" host="localhost" Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.923 [INFO][4514] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" host="localhost" Oct 9 
01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.923 [INFO][4514] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" host="localhost" Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.923 [INFO][4514] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:36.948457 containerd[1447]: 2024-10-09 01:07:36.923 [INFO][4514] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" HandleID="k8s-pod-network.bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" Workload="localhost-k8s-csi--node--driver--t6frj-eth0" Oct 9 01:07:36.949004 containerd[1447]: 2024-10-09 01:07:36.925 [INFO][4502] k8s.go 386: Populated endpoint ContainerID="bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" Namespace="calico-system" Pod="csi-node-driver-t6frj" WorkloadEndpoint="localhost-k8s-csi--node--driver--t6frj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t6frj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72a149ef-a469-42aa-b8b7-4b018e2ec3a1", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-t6frj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali09e912a581a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:36.949004 containerd[1447]: 2024-10-09 01:07:36.925 [INFO][4502] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" Namespace="calico-system" Pod="csi-node-driver-t6frj" WorkloadEndpoint="localhost-k8s-csi--node--driver--t6frj-eth0" Oct 9 01:07:36.949004 containerd[1447]: 2024-10-09 01:07:36.925 [INFO][4502] dataplane_linux.go 68: Setting the host side veth name to cali09e912a581a ContainerID="bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" Namespace="calico-system" Pod="csi-node-driver-t6frj" WorkloadEndpoint="localhost-k8s-csi--node--driver--t6frj-eth0" Oct 9 01:07:36.949004 containerd[1447]: 2024-10-09 01:07:36.927 [INFO][4502] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" Namespace="calico-system" Pod="csi-node-driver-t6frj" WorkloadEndpoint="localhost-k8s-csi--node--driver--t6frj-eth0" Oct 9 01:07:36.949004 containerd[1447]: 2024-10-09 01:07:36.927 [INFO][4502] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" Namespace="calico-system" Pod="csi-node-driver-t6frj" WorkloadEndpoint="localhost-k8s-csi--node--driver--t6frj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t6frj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72a149ef-a469-42aa-b8b7-4b018e2ec3a1", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543", Pod:"csi-node-driver-t6frj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali09e912a581a", MAC:"72:e1:52:86:32:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:36.949004 containerd[1447]: 2024-10-09 01:07:36.943 [INFO][4502] k8s.go 500: Wrote updated endpoint to datastore ContainerID="bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543" Namespace="calico-system" Pod="csi-node-driver-t6frj" WorkloadEndpoint="localhost-k8s-csi--node--driver--t6frj-eth0" Oct 9 01:07:36.973174 containerd[1447]: time="2024-10-09T01:07:36.973080753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:36.973174 containerd[1447]: time="2024-10-09T01:07:36.973142275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:36.973174 containerd[1447]: time="2024-10-09T01:07:36.973152835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:36.973380 containerd[1447]: time="2024-10-09T01:07:36.973226277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:37.020921 systemd[1]: Started cri-containerd-bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543.scope - libcontainer container bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543. Oct 9 01:07:37.030804 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:07:37.045106 containerd[1447]: time="2024-10-09T01:07:37.045012888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t6frj,Uid:72a149ef-a469-42aa-b8b7-4b018e2ec3a1,Namespace:calico-system,Attempt:1,} returns sandbox id \"bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543\"" Oct 9 01:07:37.047879 containerd[1447]: time="2024-10-09T01:07:37.047542510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 01:07:37.465088 sshd[4445]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:37.472806 systemd[1]: sshd@14-10.0.0.142:22-10.0.0.1:36336.service: Deactivated successfully. Oct 9 01:07:37.478158 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 01:07:37.480158 systemd-logind[1432]: Session 15 logged out. Waiting for processes to exit. 
Oct 9 01:07:37.488244 systemd[1]: Started sshd@15-10.0.0.142:22-10.0.0.1:36350.service - OpenSSH per-connection server daemon (10.0.0.1:36350). Oct 9 01:07:37.492002 systemd-logind[1432]: Removed session 15. Oct 9 01:07:37.507645 containerd[1447]: time="2024-10-09T01:07:37.507605663Z" level=info msg="StopPodSandbox for \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\"" Oct 9 01:07:37.529176 sshd[4582]: Accepted publickey for core from 10.0.0.1 port 36350 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:07:37.530603 sshd[4582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:37.536628 systemd-logind[1432]: New session 16 of user core. Oct 9 01:07:37.550817 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 01:07:37.613888 containerd[1447]: 2024-10-09 01:07:37.572 [INFO][4602] k8s.go 608: Cleaning up netns ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Oct 9 01:07:37.613888 containerd[1447]: 2024-10-09 01:07:37.572 [INFO][4602] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" iface="eth0" netns="/var/run/netns/cni-1ab58345-fddb-9a48-c263-67e7ef416be1" Oct 9 01:07:37.613888 containerd[1447]: 2024-10-09 01:07:37.572 [INFO][4602] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" iface="eth0" netns="/var/run/netns/cni-1ab58345-fddb-9a48-c263-67e7ef416be1" Oct 9 01:07:37.613888 containerd[1447]: 2024-10-09 01:07:37.572 [INFO][4602] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" iface="eth0" netns="/var/run/netns/cni-1ab58345-fddb-9a48-c263-67e7ef416be1" Oct 9 01:07:37.613888 containerd[1447]: 2024-10-09 01:07:37.573 [INFO][4602] k8s.go 615: Releasing IP address(es) ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Oct 9 01:07:37.613888 containerd[1447]: 2024-10-09 01:07:37.573 [INFO][4602] utils.go 188: Calico CNI releasing IP address ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Oct 9 01:07:37.613888 containerd[1447]: 2024-10-09 01:07:37.596 [INFO][4611] ipam_plugin.go 417: Releasing address using handleID ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" HandleID="k8s-pod-network.ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Workload="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" Oct 9 01:07:37.613888 containerd[1447]: 2024-10-09 01:07:37.597 [INFO][4611] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:37.613888 containerd[1447]: 2024-10-09 01:07:37.597 [INFO][4611] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:37.613888 containerd[1447]: 2024-10-09 01:07:37.606 [WARNING][4611] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" HandleID="k8s-pod-network.ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Workload="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" Oct 9 01:07:37.613888 containerd[1447]: 2024-10-09 01:07:37.606 [INFO][4611] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" HandleID="k8s-pod-network.ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Workload="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" Oct 9 01:07:37.613888 containerd[1447]: 2024-10-09 01:07:37.608 [INFO][4611] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:37.613888 containerd[1447]: 2024-10-09 01:07:37.611 [INFO][4602] k8s.go 621: Teardown processing complete. ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Oct 9 01:07:37.614728 containerd[1447]: time="2024-10-09T01:07:37.614272077Z" level=info msg="TearDown network for sandbox \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\" successfully" Oct 9 01:07:37.614728 containerd[1447]: time="2024-10-09T01:07:37.614294437Z" level=info msg="StopPodSandbox for \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\" returns successfully" Oct 9 01:07:37.614887 kubelet[2623]: E1009 01:07:37.614860 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:37.615815 containerd[1447]: time="2024-10-09T01:07:37.615407945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hztch,Uid:365f47e1-51b7-479f-b333-b53fb198b0fd,Namespace:kube-system,Attempt:1,}" Oct 9 01:07:37.616593 systemd[1]: run-netns-cni\x2d1ab58345\x2dfddb\x2d9a48\x2dc263\x2d67e7ef416be1.mount: Deactivated successfully. 
Oct 9 01:07:37.740278 systemd-networkd[1389]: calie50e5a822c5: Link UP Oct 9 01:07:37.740465 systemd-networkd[1389]: calie50e5a822c5: Gained carrier Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.666 [INFO][4625] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--hztch-eth0 coredns-7db6d8ff4d- kube-system 365f47e1-51b7-479f-b333-b53fb198b0fd 955 0 2024-10-09 01:07:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-hztch eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie50e5a822c5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hztch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hztch-" Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.667 [INFO][4625] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hztch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.694 [INFO][4638] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" HandleID="k8s-pod-network.00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" Workload="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.707 [INFO][4638] ipam_plugin.go 270: Auto assigning IP ContainerID="00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" 
HandleID="k8s-pod-network.00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" Workload="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000301ee0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-hztch", "timestamp":"2024-10-09 01:07:37.694047352 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.707 [INFO][4638] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.707 [INFO][4638] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.707 [INFO][4638] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.708 [INFO][4638] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" host="localhost" Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.712 [INFO][4638] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.717 [INFO][4638] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.720 [INFO][4638] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.722 [INFO][4638] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.722 [INFO][4638] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" host="localhost" Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.724 [INFO][4638] ipam.go 1685: Creating new handle: k8s-pod-network.00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.727 [INFO][4638] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" host="localhost" Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.733 [INFO][4638] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" host="localhost" Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.733 [INFO][4638] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" host="localhost" Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.733 [INFO][4638] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:07:37.758029 containerd[1447]: 2024-10-09 01:07:37.733 [INFO][4638] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" HandleID="k8s-pod-network.00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" Workload="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" Oct 9 01:07:37.758868 containerd[1447]: 2024-10-09 01:07:37.735 [INFO][4625] k8s.go 386: Populated endpoint ContainerID="00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hztch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hztch-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"365f47e1-51b7-479f-b333-b53fb198b0fd", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-hztch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie50e5a822c5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:37.758868 containerd[1447]: 2024-10-09 01:07:37.735 [INFO][4625] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hztch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" Oct 9 01:07:37.758868 containerd[1447]: 2024-10-09 01:07:37.735 [INFO][4625] dataplane_linux.go 68: Setting the host side veth name to calie50e5a822c5 ContainerID="00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hztch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" Oct 9 01:07:37.758868 containerd[1447]: 2024-10-09 01:07:37.740 [INFO][4625] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hztch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" Oct 9 01:07:37.758868 containerd[1447]: 2024-10-09 01:07:37.741 [INFO][4625] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hztch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hztch-eth0", 
GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"365f47e1-51b7-479f-b333-b53fb198b0fd", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b", Pod:"coredns-7db6d8ff4d-hztch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie50e5a822c5", MAC:"3a:1c:e8:64:a5:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:37.758868 containerd[1447]: 2024-10-09 01:07:37.753 [INFO][4625] k8s.go 500: Wrote updated endpoint to datastore ContainerID="00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hztch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" Oct 9 01:07:37.787774 containerd[1447]: 
time="2024-10-09T01:07:37.785061742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:37.787774 containerd[1447]: time="2024-10-09T01:07:37.785124343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:37.787968 containerd[1447]: time="2024-10-09T01:07:37.787777288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:37.788478 containerd[1447]: time="2024-10-09T01:07:37.788362503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:37.814962 systemd[1]: Started cri-containerd-00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b.scope - libcontainer container 00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b. 
Oct 9 01:07:37.830037 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:07:37.853627 containerd[1447]: time="2024-10-09T01:07:37.853429377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hztch,Uid:365f47e1-51b7-479f-b333-b53fb198b0fd,Namespace:kube-system,Attempt:1,} returns sandbox id \"00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b\"" Oct 9 01:07:37.855143 kubelet[2623]: E1009 01:07:37.855102 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:37.865013 containerd[1447]: time="2024-10-09T01:07:37.863849112Z" level=info msg="CreateContainer within sandbox \"00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:07:37.864990 sshd[4582]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:37.874531 systemd[1]: sshd@15-10.0.0.142:22-10.0.0.1:36350.service: Deactivated successfully. Oct 9 01:07:37.878177 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 01:07:37.885429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985368294.mount: Deactivated successfully. Oct 9 01:07:37.886847 systemd-logind[1432]: Session 16 logged out. Waiting for processes to exit. 
Oct 9 01:07:37.892070 containerd[1447]: time="2024-10-09T01:07:37.890111076Z" level=info msg="CreateContainer within sandbox \"00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"88b8e67d6988b995121d526ef07b700a43f1a21555dcdd1727264423e8eae569\"" Oct 9 01:07:37.893780 containerd[1447]: time="2024-10-09T01:07:37.892934625Z" level=info msg="StartContainer for \"88b8e67d6988b995121d526ef07b700a43f1a21555dcdd1727264423e8eae569\"" Oct 9 01:07:37.897062 systemd[1]: Started sshd@16-10.0.0.142:22-10.0.0.1:36366.service - OpenSSH per-connection server daemon (10.0.0.1:36366). Oct 9 01:07:37.898863 systemd-logind[1432]: Removed session 16. Oct 9 01:07:37.936940 systemd[1]: Started cri-containerd-88b8e67d6988b995121d526ef07b700a43f1a21555dcdd1727264423e8eae569.scope - libcontainer container 88b8e67d6988b995121d526ef07b700a43f1a21555dcdd1727264423e8eae569. Oct 9 01:07:37.942806 sshd[4703]: Accepted publickey for core from 10.0.0.1 port 36366 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:07:37.946382 sshd[4703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:37.962689 systemd-logind[1432]: New session 17 of user core. Oct 9 01:07:37.967932 systemd[1]: Started session-17.scope - Session 17 of User core. 
Oct 9 01:07:37.980357 containerd[1447]: time="2024-10-09T01:07:37.979940637Z" level=info msg="StartContainer for \"88b8e67d6988b995121d526ef07b700a43f1a21555dcdd1727264423e8eae569\" returns successfully" Oct 9 01:07:38.053833 containerd[1447]: time="2024-10-09T01:07:38.052970845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:38.053833 containerd[1447]: time="2024-10-09T01:07:38.053692223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Oct 9 01:07:38.054969 containerd[1447]: time="2024-10-09T01:07:38.054934933Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:38.061887 containerd[1447]: time="2024-10-09T01:07:38.061842339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:38.062540 containerd[1447]: time="2024-10-09T01:07:38.062287950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.0147134s" Oct 9 01:07:38.062540 containerd[1447]: time="2024-10-09T01:07:38.062316311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Oct 9 01:07:38.066819 containerd[1447]: time="2024-10-09T01:07:38.066785098Z" level=info msg="CreateContainer within sandbox 
\"bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 01:07:38.087603 containerd[1447]: time="2024-10-09T01:07:38.087059787Z" level=info msg="CreateContainer within sandbox \"bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"48a963a0ad24996903990662c4208a07dae5d433a0e2ed9bb74034cdab0936b0\"" Oct 9 01:07:38.087787 containerd[1447]: time="2024-10-09T01:07:38.087734363Z" level=info msg="StartContainer for \"48a963a0ad24996903990662c4208a07dae5d433a0e2ed9bb74034cdab0936b0\"" Oct 9 01:07:38.123905 systemd[1]: Started cri-containerd-48a963a0ad24996903990662c4208a07dae5d433a0e2ed9bb74034cdab0936b0.scope - libcontainer container 48a963a0ad24996903990662c4208a07dae5d433a0e2ed9bb74034cdab0936b0. Oct 9 01:07:38.161885 containerd[1447]: time="2024-10-09T01:07:38.161844469Z" level=info msg="StartContainer for \"48a963a0ad24996903990662c4208a07dae5d433a0e2ed9bb74034cdab0936b0\" returns successfully" Oct 9 01:07:38.162865 containerd[1447]: time="2024-10-09T01:07:38.162819852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 01:07:38.177843 sshd[4703]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:38.180810 systemd[1]: sshd@16-10.0.0.142:22-10.0.0.1:36366.service: Deactivated successfully. Oct 9 01:07:38.182406 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 01:07:38.184087 systemd-logind[1432]: Session 17 logged out. Waiting for processes to exit. Oct 9 01:07:38.184977 systemd-logind[1432]: Removed session 17. 
Oct 9 01:07:38.333892 systemd-networkd[1389]: cali09e912a581a: Gained IPv6LL Oct 9 01:07:38.668225 kubelet[2623]: E1009 01:07:38.668177 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:38.690765 kubelet[2623]: I1009 01:07:38.690592 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hztch" podStartSLOduration=34.69057513 podStartE2EDuration="34.69057513s" podCreationTimestamp="2024-10-09 01:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:07:38.679660307 +0000 UTC m=+51.261178112" watchObservedRunningTime="2024-10-09 01:07:38.69057513 +0000 UTC m=+51.272092935" Oct 9 01:07:39.200502 containerd[1447]: time="2024-10-09T01:07:39.200437340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:39.201181 containerd[1447]: time="2024-10-09T01:07:39.201137717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 9 01:07:39.201792 containerd[1447]: time="2024-10-09T01:07:39.201759852Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:39.207325 containerd[1447]: time="2024-10-09T01:07:39.207272383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:39.208204 containerd[1447]: time="2024-10-09T01:07:39.208165564Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 1.04530843s" Oct 9 01:07:39.210833 containerd[1447]: time="2024-10-09T01:07:39.210797346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 9 01:07:39.215165 containerd[1447]: time="2024-10-09T01:07:39.215042927Z" level=info msg="CreateContainer within sandbox \"bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 01:07:39.231917 systemd-networkd[1389]: calie50e5a822c5: Gained IPv6LL Oct 9 01:07:39.234835 containerd[1447]: time="2024-10-09T01:07:39.234727634Z" level=info msg="CreateContainer within sandbox \"bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ff98ca1132c65235f5595894cdaf35111913d4233e986118c0a16b4f4bf98a98\"" Oct 9 01:07:39.235582 containerd[1447]: time="2024-10-09T01:07:39.235534813Z" level=info msg="StartContainer for \"ff98ca1132c65235f5595894cdaf35111913d4233e986118c0a16b4f4bf98a98\"" Oct 9 01:07:39.271123 systemd[1]: Started cri-containerd-ff98ca1132c65235f5595894cdaf35111913d4233e986118c0a16b4f4bf98a98.scope - libcontainer container ff98ca1132c65235f5595894cdaf35111913d4233e986118c0a16b4f4bf98a98. 
Oct 9 01:07:39.304920 containerd[1447]: time="2024-10-09T01:07:39.304736294Z" level=info msg="StartContainer for \"ff98ca1132c65235f5595894cdaf35111913d4233e986118c0a16b4f4bf98a98\" returns successfully" Oct 9 01:07:39.508271 containerd[1447]: time="2024-10-09T01:07:39.507385620Z" level=info msg="StopPodSandbox for \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\"" Oct 9 01:07:39.508271 containerd[1447]: time="2024-10-09T01:07:39.507405141Z" level=info msg="StopPodSandbox for \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\"" Oct 9 01:07:39.585350 kubelet[2623]: I1009 01:07:39.585299 2623 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 01:07:39.599347 kubelet[2623]: I1009 01:07:39.599317 2623 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 01:07:39.600853 containerd[1447]: 2024-10-09 01:07:39.555 [INFO][4873] k8s.go 608: Cleaning up netns ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Oct 9 01:07:39.600853 containerd[1447]: 2024-10-09 01:07:39.556 [INFO][4873] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" iface="eth0" netns="/var/run/netns/cni-c0f24221-7674-d503-eb03-e84ef95a40ac" Oct 9 01:07:39.600853 containerd[1447]: 2024-10-09 01:07:39.556 [INFO][4873] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" iface="eth0" netns="/var/run/netns/cni-c0f24221-7674-d503-eb03-e84ef95a40ac" Oct 9 01:07:39.600853 containerd[1447]: 2024-10-09 01:07:39.557 [INFO][4873] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" iface="eth0" netns="/var/run/netns/cni-c0f24221-7674-d503-eb03-e84ef95a40ac" Oct 9 01:07:39.600853 containerd[1447]: 2024-10-09 01:07:39.557 [INFO][4873] k8s.go 615: Releasing IP address(es) ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Oct 9 01:07:39.600853 containerd[1447]: 2024-10-09 01:07:39.557 [INFO][4873] utils.go 188: Calico CNI releasing IP address ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Oct 9 01:07:39.600853 containerd[1447]: 2024-10-09 01:07:39.584 [INFO][4889] ipam_plugin.go 417: Releasing address using handleID ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" HandleID="k8s-pod-network.c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Workload="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" Oct 9 01:07:39.600853 containerd[1447]: 2024-10-09 01:07:39.584 [INFO][4889] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:39.600853 containerd[1447]: 2024-10-09 01:07:39.584 [INFO][4889] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:39.600853 containerd[1447]: 2024-10-09 01:07:39.593 [WARNING][4889] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" HandleID="k8s-pod-network.c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Workload="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" Oct 9 01:07:39.600853 containerd[1447]: 2024-10-09 01:07:39.593 [INFO][4889] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" HandleID="k8s-pod-network.c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Workload="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" Oct 9 01:07:39.600853 containerd[1447]: 2024-10-09 01:07:39.594 [INFO][4889] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:39.600853 containerd[1447]: 2024-10-09 01:07:39.596 [INFO][4873] k8s.go 621: Teardown processing complete. ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Oct 9 01:07:39.602196 containerd[1447]: time="2024-10-09T01:07:39.602163668Z" level=info msg="TearDown network for sandbox \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\" successfully" Oct 9 01:07:39.602196 containerd[1447]: time="2024-10-09T01:07:39.602196429Z" level=info msg="StopPodSandbox for \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\" returns successfully" Oct 9 01:07:39.602488 kubelet[2623]: E1009 01:07:39.602463 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:39.603828 containerd[1447]: time="2024-10-09T01:07:39.603605422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2g6g,Uid:62fb3473-3704-419a-9a60-2f9f5f1e2c3b,Namespace:kube-system,Attempt:1,}" Oct 9 01:07:39.614872 containerd[1447]: 2024-10-09 01:07:39.558 [INFO][4874] k8s.go 608: Cleaning up netns ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" Oct 9 
01:07:39.614872 containerd[1447]: 2024-10-09 01:07:39.558 [INFO][4874] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" iface="eth0" netns="/var/run/netns/cni-f6777044-2997-c9d6-b764-d8543e9e5635" Oct 9 01:07:39.614872 containerd[1447]: 2024-10-09 01:07:39.558 [INFO][4874] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" iface="eth0" netns="/var/run/netns/cni-f6777044-2997-c9d6-b764-d8543e9e5635" Oct 9 01:07:39.614872 containerd[1447]: 2024-10-09 01:07:39.558 [INFO][4874] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" iface="eth0" netns="/var/run/netns/cni-f6777044-2997-c9d6-b764-d8543e9e5635" Oct 9 01:07:39.614872 containerd[1447]: 2024-10-09 01:07:39.558 [INFO][4874] k8s.go 615: Releasing IP address(es) ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" Oct 9 01:07:39.614872 containerd[1447]: 2024-10-09 01:07:39.558 [INFO][4874] utils.go 188: Calico CNI releasing IP address ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" Oct 9 01:07:39.614872 containerd[1447]: 2024-10-09 01:07:39.585 [INFO][4890] ipam_plugin.go 417: Releasing address using handleID ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" HandleID="k8s-pod-network.a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" Workload="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0" Oct 9 01:07:39.614872 containerd[1447]: 2024-10-09 01:07:39.586 [INFO][4890] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:39.614872 containerd[1447]: 2024-10-09 01:07:39.594 [INFO][4890] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:07:39.614872 containerd[1447]: 2024-10-09 01:07:39.608 [WARNING][4890] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" HandleID="k8s-pod-network.a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" Workload="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0" Oct 9 01:07:39.614872 containerd[1447]: 2024-10-09 01:07:39.608 [INFO][4890] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" HandleID="k8s-pod-network.a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" Workload="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0" Oct 9 01:07:39.614872 containerd[1447]: 2024-10-09 01:07:39.610 [INFO][4890] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:39.614872 containerd[1447]: 2024-10-09 01:07:39.612 [INFO][4874] k8s.go 621: Teardown processing complete. 
ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" Oct 9 01:07:39.615232 containerd[1447]: time="2024-10-09T01:07:39.615029013Z" level=info msg="TearDown network for sandbox \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\" successfully" Oct 9 01:07:39.615232 containerd[1447]: time="2024-10-09T01:07:39.615053974Z" level=info msg="StopPodSandbox for \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\" returns successfully" Oct 9 01:07:39.616378 containerd[1447]: time="2024-10-09T01:07:39.615575466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c6f8c56b-smw77,Uid:6e9dceb0-a563-4980-b699-4a2e937ee8e7,Namespace:calico-system,Attempt:1,}" Oct 9 01:07:39.676280 kubelet[2623]: E1009 01:07:39.676248 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:39.686927 kubelet[2623]: I1009 01:07:39.686645 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-t6frj" podStartSLOduration=26.52149875 podStartE2EDuration="28.686628791s" podCreationTimestamp="2024-10-09 01:07:11 +0000 UTC" firstStartedPulling="2024-10-09 01:07:37.047297024 +0000 UTC m=+49.628814829" lastFinishedPulling="2024-10-09 01:07:39.212427105 +0000 UTC m=+51.793944870" observedRunningTime="2024-10-09 01:07:39.686400386 +0000 UTC m=+52.267918191" watchObservedRunningTime="2024-10-09 01:07:39.686628791 +0000 UTC m=+52.268146596" Oct 9 01:07:39.747602 systemd-networkd[1389]: cali55861a12d54: Link UP Oct 9 01:07:39.748385 systemd-networkd[1389]: cali55861a12d54: Gained carrier Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.664 [INFO][4905] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0 coredns-7db6d8ff4d- kube-system 
62fb3473-3704-419a-9a60-2f9f5f1e2c3b 1004 0 2024-10-09 01:07:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-h2g6g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali55861a12d54 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h2g6g" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--h2g6g-" Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.664 [INFO][4905] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h2g6g" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.702 [INFO][4940] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" HandleID="k8s-pod-network.cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" Workload="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.714 [INFO][4940] ipam_plugin.go 270: Auto assigning IP ContainerID="cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" HandleID="k8s-pod-network.cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" Workload="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400070e6f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-h2g6g", "timestamp":"2024-10-09 01:07:39.70259765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.714 [INFO][4940] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.714 [INFO][4940] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.714 [INFO][4940] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.716 [INFO][4940] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" host="localhost" Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.720 [INFO][4940] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.724 [INFO][4940] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.726 [INFO][4940] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.728 [INFO][4940] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.728 [INFO][4940] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" host="localhost" Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.731 [INFO][4940] ipam.go 1685: Creating new handle: k8s-pod-network.cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3 Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.734 [INFO][4940] ipam.go 1203: Writing block in order to claim 
IPs block=192.168.88.128/26 handle="k8s-pod-network.cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" host="localhost" Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.741 [INFO][4940] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" host="localhost" Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.741 [INFO][4940] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" host="localhost" Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.741 [INFO][4940] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:39.763616 containerd[1447]: 2024-10-09 01:07:39.741 [INFO][4940] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" HandleID="k8s-pod-network.cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" Workload="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" Oct 9 01:07:39.764137 containerd[1447]: 2024-10-09 01:07:39.744 [INFO][4905] k8s.go 386: Populated endpoint ContainerID="cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h2g6g" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"62fb3473-3704-419a-9a60-2f9f5f1e2c3b", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-h2g6g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55861a12d54", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:39.764137 containerd[1447]: 2024-10-09 01:07:39.745 [INFO][4905] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h2g6g" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" Oct 9 01:07:39.764137 containerd[1447]: 2024-10-09 01:07:39.745 [INFO][4905] dataplane_linux.go 68: Setting the host side veth name to cali55861a12d54 ContainerID="cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h2g6g" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" Oct 9 01:07:39.764137 containerd[1447]: 2024-10-09 01:07:39.749 [INFO][4905] 
dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h2g6g" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" Oct 9 01:07:39.764137 containerd[1447]: 2024-10-09 01:07:39.749 [INFO][4905] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h2g6g" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"62fb3473-3704-419a-9a60-2f9f5f1e2c3b", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3", Pod:"coredns-7db6d8ff4d-h2g6g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55861a12d54", MAC:"aa:07:3f:ac:f7:8f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:39.764137 containerd[1447]: 2024-10-09 01:07:39.759 [INFO][4905] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h2g6g" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" Oct 9 01:07:39.785618 containerd[1447]: time="2024-10-09T01:07:39.785265451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:39.788133 containerd[1447]: time="2024-10-09T01:07:39.787917473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:39.788133 containerd[1447]: time="2024-10-09T01:07:39.787947194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:39.788133 containerd[1447]: time="2024-10-09T01:07:39.788057957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:39.790820 systemd-networkd[1389]: cali75aec8976aa: Link UP Oct 9 01:07:39.790976 systemd-networkd[1389]: cali75aec8976aa: Gained carrier Oct 9 01:07:39.800049 systemd[1]: run-netns-cni\x2df6777044\x2d2997\x2dc9d6\x2db764\x2dd8543e9e5635.mount: Deactivated successfully. Oct 9 01:07:39.800139 systemd[1]: run-netns-cni\x2dc0f24221\x2d7674\x2dd503\x2deb03\x2de84ef95a40ac.mount: Deactivated successfully. 
Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.667 [INFO][4916] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0 calico-kube-controllers-8c6f8c56b- calico-system 6e9dceb0-a563-4980-b699-4a2e937ee8e7 1005 0 2024-10-09 01:07:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8c6f8c56b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8c6f8c56b-smw77 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali75aec8976aa [] []}} ContainerID="fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" Namespace="calico-system" Pod="calico-kube-controllers-8c6f8c56b-smw77" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-" Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.667 [INFO][4916] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" Namespace="calico-system" Pod="calico-kube-controllers-8c6f8c56b-smw77" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0" Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.703 [INFO][4935] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" HandleID="k8s-pod-network.fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" Workload="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0" Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.718 [INFO][4935] ipam_plugin.go 270: Auto assigning IP ContainerID="fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" 
HandleID="k8s-pod-network.fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" Workload="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f3910), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8c6f8c56b-smw77", "timestamp":"2024-10-09 01:07:39.703167743 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.718 [INFO][4935] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.741 [INFO][4935] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.741 [INFO][4935] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.743 [INFO][4935] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" host="localhost" Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.754 [INFO][4935] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.760 [INFO][4935] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.766 [INFO][4935] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.770 [INFO][4935] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.770 [INFO][4935] ipam.go 
1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" host="localhost" Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.772 [INFO][4935] ipam.go 1685: Creating new handle: k8s-pod-network.fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.778 [INFO][4935] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" host="localhost" Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.785 [INFO][4935] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" host="localhost" Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.785 [INFO][4935] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" host="localhost" Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.785 [INFO][4935] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:07:39.811890 containerd[1447]: 2024-10-09 01:07:39.785 [INFO][4935] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" HandleID="k8s-pod-network.fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" Workload="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0" Oct 9 01:07:39.812483 containerd[1447]: 2024-10-09 01:07:39.789 [INFO][4916] k8s.go 386: Populated endpoint ContainerID="fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" Namespace="calico-system" Pod="calico-kube-controllers-8c6f8c56b-smw77" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0", GenerateName:"calico-kube-controllers-8c6f8c56b-", Namespace:"calico-system", SelfLink:"", UID:"6e9dceb0-a563-4980-b699-4a2e937ee8e7", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8c6f8c56b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8c6f8c56b-smw77", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali75aec8976aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:39.812483 containerd[1447]: 2024-10-09 01:07:39.789 [INFO][4916] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" Namespace="calico-system" Pod="calico-kube-controllers-8c6f8c56b-smw77" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0" Oct 9 01:07:39.812483 containerd[1447]: 2024-10-09 01:07:39.789 [INFO][4916] dataplane_linux.go 68: Setting the host side veth name to cali75aec8976aa ContainerID="fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" Namespace="calico-system" Pod="calico-kube-controllers-8c6f8c56b-smw77" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0" Oct 9 01:07:39.812483 containerd[1447]: 2024-10-09 01:07:39.791 [INFO][4916] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" Namespace="calico-system" Pod="calico-kube-controllers-8c6f8c56b-smw77" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0" Oct 9 01:07:39.812483 containerd[1447]: 2024-10-09 01:07:39.792 [INFO][4916] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" Namespace="calico-system" Pod="calico-kube-controllers-8c6f8c56b-smw77" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0", GenerateName:"calico-kube-controllers-8c6f8c56b-", 
Namespace:"calico-system", SelfLink:"", UID:"6e9dceb0-a563-4980-b699-4a2e937ee8e7", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8c6f8c56b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e", Pod:"calico-kube-controllers-8c6f8c56b-smw77", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali75aec8976aa", MAC:"4a:3f:5a:4d:28:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:39.812483 containerd[1447]: 2024-10-09 01:07:39.806 [INFO][4916] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e" Namespace="calico-system" Pod="calico-kube-controllers-8c6f8c56b-smw77" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0" Oct 9 01:07:39.817930 systemd[1]: Started cri-containerd-cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3.scope - libcontainer container cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3. 
Oct 9 01:07:39.829131 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:07:39.834187 containerd[1447]: time="2024-10-09T01:07:39.834109129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:39.834187 containerd[1447]: time="2024-10-09T01:07:39.834156410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:39.834187 containerd[1447]: time="2024-10-09T01:07:39.834166690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:39.834355 containerd[1447]: time="2024-10-09T01:07:39.834310374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:39.853354 containerd[1447]: time="2024-10-09T01:07:39.853320345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h2g6g,Uid:62fb3473-3704-419a-9a60-2f9f5f1e2c3b,Namespace:kube-system,Attempt:1,} returns sandbox id \"cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3\"" Oct 9 01:07:39.854399 kubelet[2623]: E1009 01:07:39.854375 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:39.857056 containerd[1447]: time="2024-10-09T01:07:39.857008072Z" level=info msg="CreateContainer within sandbox \"cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:07:39.857169 systemd[1]: Started cri-containerd-fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e.scope - libcontainer container 
fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e. Oct 9 01:07:39.869136 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:07:39.874063 containerd[1447]: time="2024-10-09T01:07:39.874025796Z" level=info msg="CreateContainer within sandbox \"cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f6a4f1d208db0f8a04e4b26374e71d02ac527a411beeea64fddcf015da316e77\"" Oct 9 01:07:39.876579 containerd[1447]: time="2024-10-09T01:07:39.874649690Z" level=info msg="StartContainer for \"f6a4f1d208db0f8a04e4b26374e71d02ac527a411beeea64fddcf015da316e77\"" Oct 9 01:07:39.890498 containerd[1447]: time="2024-10-09T01:07:39.890449345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8c6f8c56b-smw77,Uid:6e9dceb0-a563-4980-b699-4a2e937ee8e7,Namespace:calico-system,Attempt:1,} returns sandbox id \"fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e\"" Oct 9 01:07:39.892140 containerd[1447]: time="2024-10-09T01:07:39.892057823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 01:07:39.908997 systemd[1]: Started cri-containerd-f6a4f1d208db0f8a04e4b26374e71d02ac527a411beeea64fddcf015da316e77.scope - libcontainer container f6a4f1d208db0f8a04e4b26374e71d02ac527a411beeea64fddcf015da316e77. 
Oct 9 01:07:39.942051 containerd[1447]: time="2024-10-09T01:07:39.942008848Z" level=info msg="StartContainer for \"f6a4f1d208db0f8a04e4b26374e71d02ac527a411beeea64fddcf015da316e77\" returns successfully" Oct 9 01:07:40.681092 kubelet[2623]: E1009 01:07:40.681008 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:40.681418 kubelet[2623]: E1009 01:07:40.681202 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:40.715714 kubelet[2623]: I1009 01:07:40.715541 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-h2g6g" podStartSLOduration=36.715526178 podStartE2EDuration="36.715526178s" podCreationTimestamp="2024-10-09 01:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:07:40.690931683 +0000 UTC m=+53.272449488" watchObservedRunningTime="2024-10-09 01:07:40.715526178 +0000 UTC m=+53.297043983" Oct 9 01:07:40.894914 systemd-networkd[1389]: cali75aec8976aa: Gained IPv6LL Oct 9 01:07:41.455437 containerd[1447]: time="2024-10-09T01:07:41.454731573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:41.456021 containerd[1447]: time="2024-10-09T01:07:41.455981162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 9 01:07:41.456146 containerd[1447]: time="2024-10-09T01:07:41.456126365Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Oct 9 01:07:41.458899 containerd[1447]: time="2024-10-09T01:07:41.458871389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:41.459872 containerd[1447]: time="2024-10-09T01:07:41.459760129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 1.567653024s" Oct 9 01:07:41.459872 containerd[1447]: time="2024-10-09T01:07:41.459792210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Oct 9 01:07:41.467996 containerd[1447]: time="2024-10-09T01:07:41.467795434Z" level=info msg="CreateContainer within sandbox \"fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 01:07:41.469924 systemd-networkd[1389]: cali55861a12d54: Gained IPv6LL Oct 9 01:07:41.480914 containerd[1447]: time="2024-10-09T01:07:41.480881215Z" level=info msg="CreateContainer within sandbox \"fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8e5c44868a1ddc58c1aba049528522b66ced5e1e5ad56092c17169f12f1058e0\"" Oct 9 01:07:41.481275 containerd[1447]: time="2024-10-09T01:07:41.481250784Z" level=info msg="StartContainer for \"8e5c44868a1ddc58c1aba049528522b66ced5e1e5ad56092c17169f12f1058e0\"" Oct 9 01:07:41.514929 systemd[1]: Started 
cri-containerd-8e5c44868a1ddc58c1aba049528522b66ced5e1e5ad56092c17169f12f1058e0.scope - libcontainer container 8e5c44868a1ddc58c1aba049528522b66ced5e1e5ad56092c17169f12f1058e0. Oct 9 01:07:41.549100 containerd[1447]: time="2024-10-09T01:07:41.549053905Z" level=info msg="StartContainer for \"8e5c44868a1ddc58c1aba049528522b66ced5e1e5ad56092c17169f12f1058e0\" returns successfully" Oct 9 01:07:41.691469 kubelet[2623]: E1009 01:07:41.691416 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:41.702250 kubelet[2623]: I1009 01:07:41.702181 2623 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8c6f8c56b-smw77" podStartSLOduration=28.133228975 podStartE2EDuration="29.70216427s" podCreationTimestamp="2024-10-09 01:07:12 +0000 UTC" firstStartedPulling="2024-10-09 01:07:39.891638533 +0000 UTC m=+52.473156338" lastFinishedPulling="2024-10-09 01:07:41.460573828 +0000 UTC m=+54.042091633" observedRunningTime="2024-10-09 01:07:41.70213707 +0000 UTC m=+54.283654875" watchObservedRunningTime="2024-10-09 01:07:41.70216427 +0000 UTC m=+54.283682075" Oct 9 01:07:42.693070 kubelet[2623]: E1009 01:07:42.692997 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:07:43.217277 systemd[1]: Started sshd@17-10.0.0.142:22-10.0.0.1:44208.service - OpenSSH per-connection server daemon (10.0.0.1:44208). Oct 9 01:07:43.271527 sshd[5182]: Accepted publickey for core from 10.0.0.1 port 44208 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:07:43.273248 sshd[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:07:43.279413 systemd-logind[1432]: New session 18 of user core. 
Oct 9 01:07:43.294953 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 01:07:43.503780 sshd[5182]: pam_unix(sshd:session): session closed for user core Oct 9 01:07:43.508061 systemd[1]: sshd@17-10.0.0.142:22-10.0.0.1:44208.service: Deactivated successfully. Oct 9 01:07:43.511459 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 01:07:43.513206 systemd-logind[1432]: Session 18 logged out. Waiting for processes to exit. Oct 9 01:07:43.514039 systemd-logind[1432]: Removed session 18. Oct 9 01:07:47.480371 containerd[1447]: time="2024-10-09T01:07:47.480327083Z" level=info msg="StopPodSandbox for \"6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7\"" Oct 9 01:07:47.480785 containerd[1447]: time="2024-10-09T01:07:47.480431845Z" level=info msg="TearDown network for sandbox \"6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7\" successfully" Oct 9 01:07:47.480785 containerd[1447]: time="2024-10-09T01:07:47.480444485Z" level=info msg="StopPodSandbox for \"6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7\" returns successfully" Oct 9 01:07:47.481274 containerd[1447]: time="2024-10-09T01:07:47.481244582Z" level=info msg="RemovePodSandbox for \"6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7\"" Oct 9 01:07:47.483528 containerd[1447]: time="2024-10-09T01:07:47.483489151Z" level=info msg="Forcibly stopping sandbox \"6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7\"" Oct 9 01:07:47.483595 containerd[1447]: time="2024-10-09T01:07:47.483585433Z" level=info msg="TearDown network for sandbox \"6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7\" successfully" Oct 9 01:07:47.491588 containerd[1447]: time="2024-10-09T01:07:47.491532323Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Oct 9 01:07:47.491651 containerd[1447]: time="2024-10-09T01:07:47.491611444Z" level=info msg="RemovePodSandbox \"6c1688362a1b4869a64b0d01804633486d76e883425accb58a58583c276595b7\" returns successfully" Oct 9 01:07:47.492019 containerd[1447]: time="2024-10-09T01:07:47.491979812Z" level=info msg="StopPodSandbox for \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\"" Oct 9 01:07:47.583857 containerd[1447]: 2024-10-09 01:07:47.541 [WARNING][5213] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"62fb3473-3704-419a-9a60-2f9f5f1e2c3b", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3", Pod:"coredns-7db6d8ff4d-h2g6g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55861a12d54", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:47.583857 containerd[1447]: 2024-10-09 01:07:47.542 [INFO][5213] k8s.go 608: Cleaning up netns ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Oct 9 01:07:47.583857 containerd[1447]: 2024-10-09 01:07:47.542 [INFO][5213] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" iface="eth0" netns="" Oct 9 01:07:47.583857 containerd[1447]: 2024-10-09 01:07:47.542 [INFO][5213] k8s.go 615: Releasing IP address(es) ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Oct 9 01:07:47.583857 containerd[1447]: 2024-10-09 01:07:47.542 [INFO][5213] utils.go 188: Calico CNI releasing IP address ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Oct 9 01:07:47.583857 containerd[1447]: 2024-10-09 01:07:47.561 [INFO][5223] ipam_plugin.go 417: Releasing address using handleID ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" HandleID="k8s-pod-network.c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Workload="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" Oct 9 01:07:47.583857 containerd[1447]: 2024-10-09 01:07:47.561 [INFO][5223] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:47.583857 containerd[1447]: 2024-10-09 01:07:47.561 [INFO][5223] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:07:47.583857 containerd[1447]: 2024-10-09 01:07:47.576 [WARNING][5223] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" HandleID="k8s-pod-network.c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Workload="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" Oct 9 01:07:47.583857 containerd[1447]: 2024-10-09 01:07:47.577 [INFO][5223] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" HandleID="k8s-pod-network.c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Workload="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" Oct 9 01:07:47.583857 containerd[1447]: 2024-10-09 01:07:47.578 [INFO][5223] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:47.583857 containerd[1447]: 2024-10-09 01:07:47.581 [INFO][5213] k8s.go 621: Teardown processing complete. ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Oct 9 01:07:47.584655 containerd[1447]: time="2024-10-09T01:07:47.583893420Z" level=info msg="TearDown network for sandbox \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\" successfully" Oct 9 01:07:47.584655 containerd[1447]: time="2024-10-09T01:07:47.583926781Z" level=info msg="StopPodSandbox for \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\" returns successfully" Oct 9 01:07:47.585021 containerd[1447]: time="2024-10-09T01:07:47.584407111Z" level=info msg="RemovePodSandbox for \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\"" Oct 9 01:07:47.585021 containerd[1447]: time="2024-10-09T01:07:47.584798640Z" level=info msg="Forcibly stopping sandbox \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\"" Oct 9 01:07:47.653010 containerd[1447]: 2024-10-09 01:07:47.617 [WARNING][5246] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"62fb3473-3704-419a-9a60-2f9f5f1e2c3b", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc55b7b6c9ce5cb56fc10259072bd2289be0fb2ef300b65b44bf20345068b6b3", Pod:"coredns-7db6d8ff4d-h2g6g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali55861a12d54", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:47.653010 containerd[1447]: 2024-10-09 01:07:47.617 [INFO][5246] k8s.go 608: 
Cleaning up netns ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Oct 9 01:07:47.653010 containerd[1447]: 2024-10-09 01:07:47.618 [INFO][5246] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" iface="eth0" netns="" Oct 9 01:07:47.653010 containerd[1447]: 2024-10-09 01:07:47.618 [INFO][5246] k8s.go 615: Releasing IP address(es) ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Oct 9 01:07:47.653010 containerd[1447]: 2024-10-09 01:07:47.618 [INFO][5246] utils.go 188: Calico CNI releasing IP address ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Oct 9 01:07:47.653010 containerd[1447]: 2024-10-09 01:07:47.636 [INFO][5254] ipam_plugin.go 417: Releasing address using handleID ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" HandleID="k8s-pod-network.c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Workload="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" Oct 9 01:07:47.653010 containerd[1447]: 2024-10-09 01:07:47.637 [INFO][5254] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:47.653010 containerd[1447]: 2024-10-09 01:07:47.637 [INFO][5254] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:47.653010 containerd[1447]: 2024-10-09 01:07:47.647 [WARNING][5254] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" HandleID="k8s-pod-network.c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Workload="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" Oct 9 01:07:47.653010 containerd[1447]: 2024-10-09 01:07:47.647 [INFO][5254] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" HandleID="k8s-pod-network.c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Workload="localhost-k8s-coredns--7db6d8ff4d--h2g6g-eth0" Oct 9 01:07:47.653010 containerd[1447]: 2024-10-09 01:07:47.648 [INFO][5254] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:47.653010 containerd[1447]: 2024-10-09 01:07:47.650 [INFO][5246] k8s.go 621: Teardown processing complete. ContainerID="c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a" Oct 9 01:07:47.653010 containerd[1447]: time="2024-10-09T01:07:47.652893578Z" level=info msg="TearDown network for sandbox \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\" successfully" Oct 9 01:07:47.656062 containerd[1447]: time="2024-10-09T01:07:47.655890402Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:07:47.656062 containerd[1447]: time="2024-10-09T01:07:47.655992284Z" level=info msg="RemovePodSandbox \"c0b25e2a1229f1f4a8c2f2a8c66039294ae43e1958fcf55f84b7a89db43ef38a\" returns successfully" Oct 9 01:07:47.656770 containerd[1447]: time="2024-10-09T01:07:47.656516895Z" level=info msg="StopPodSandbox for \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\"" Oct 9 01:07:47.730959 containerd[1447]: 2024-10-09 01:07:47.691 [WARNING][5277] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hztch-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"365f47e1-51b7-479f-b333-b53fb198b0fd", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b", Pod:"coredns-7db6d8ff4d-hztch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie50e5a822c5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:47.730959 containerd[1447]: 2024-10-09 01:07:47.691 [INFO][5277] k8s.go 608: Cleaning up netns ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Oct 9 01:07:47.730959 containerd[1447]: 2024-10-09 01:07:47.691 [INFO][5277] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" iface="eth0" netns="" Oct 9 01:07:47.730959 containerd[1447]: 2024-10-09 01:07:47.691 [INFO][5277] k8s.go 615: Releasing IP address(es) ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Oct 9 01:07:47.730959 containerd[1447]: 2024-10-09 01:07:47.692 [INFO][5277] utils.go 188: Calico CNI releasing IP address ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Oct 9 01:07:47.730959 containerd[1447]: 2024-10-09 01:07:47.717 [INFO][5284] ipam_plugin.go 417: Releasing address using handleID ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" HandleID="k8s-pod-network.ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Workload="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" Oct 9 01:07:47.730959 containerd[1447]: 2024-10-09 01:07:47.717 [INFO][5284] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:47.730959 containerd[1447]: 2024-10-09 01:07:47.717 [INFO][5284] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:07:47.730959 containerd[1447]: 2024-10-09 01:07:47.726 [WARNING][5284] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" HandleID="k8s-pod-network.ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Workload="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" Oct 9 01:07:47.730959 containerd[1447]: 2024-10-09 01:07:47.726 [INFO][5284] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" HandleID="k8s-pod-network.ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Workload="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" Oct 9 01:07:47.730959 containerd[1447]: 2024-10-09 01:07:47.728 [INFO][5284] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:47.730959 containerd[1447]: 2024-10-09 01:07:47.729 [INFO][5277] k8s.go 621: Teardown processing complete. ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Oct 9 01:07:47.730959 containerd[1447]: time="2024-10-09T01:07:47.730937849Z" level=info msg="TearDown network for sandbox \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\" successfully" Oct 9 01:07:47.730959 containerd[1447]: time="2024-10-09T01:07:47.730960209Z" level=info msg="StopPodSandbox for \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\" returns successfully" Oct 9 01:07:47.732318 containerd[1447]: time="2024-10-09T01:07:47.732272077Z" level=info msg="RemovePodSandbox for \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\"" Oct 9 01:07:47.732318 containerd[1447]: time="2024-10-09T01:07:47.732314518Z" level=info msg="Forcibly stopping sandbox \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\"" Oct 9 01:07:47.803001 containerd[1447]: 2024-10-09 01:07:47.764 [WARNING][5307] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--hztch-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"365f47e1-51b7-479f-b333-b53fb198b0fd", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"00a234a8cb75f9d30a22fac3fd603e8d02f3c5ad3d3f4c381f1aa8345c04564b", Pod:"coredns-7db6d8ff4d-hztch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie50e5a822c5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:47.803001 containerd[1447]: 2024-10-09 01:07:47.764 [INFO][5307] k8s.go 608: 
Cleaning up netns ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Oct 9 01:07:47.803001 containerd[1447]: 2024-10-09 01:07:47.764 [INFO][5307] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" iface="eth0" netns="" Oct 9 01:07:47.803001 containerd[1447]: 2024-10-09 01:07:47.764 [INFO][5307] k8s.go 615: Releasing IP address(es) ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Oct 9 01:07:47.803001 containerd[1447]: 2024-10-09 01:07:47.764 [INFO][5307] utils.go 188: Calico CNI releasing IP address ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Oct 9 01:07:47.803001 containerd[1447]: 2024-10-09 01:07:47.786 [INFO][5315] ipam_plugin.go 417: Releasing address using handleID ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" HandleID="k8s-pod-network.ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Workload="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" Oct 9 01:07:47.803001 containerd[1447]: 2024-10-09 01:07:47.786 [INFO][5315] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:47.803001 containerd[1447]: 2024-10-09 01:07:47.786 [INFO][5315] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:47.803001 containerd[1447]: 2024-10-09 01:07:47.795 [WARNING][5315] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" HandleID="k8s-pod-network.ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Workload="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" Oct 9 01:07:47.803001 containerd[1447]: 2024-10-09 01:07:47.795 [INFO][5315] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" HandleID="k8s-pod-network.ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Workload="localhost-k8s-coredns--7db6d8ff4d--hztch-eth0" Oct 9 01:07:47.803001 containerd[1447]: 2024-10-09 01:07:47.799 [INFO][5315] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:47.803001 containerd[1447]: 2024-10-09 01:07:47.801 [INFO][5307] k8s.go 621: Teardown processing complete. ContainerID="ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12" Oct 9 01:07:47.803385 containerd[1447]: time="2024-10-09T01:07:47.803020872Z" level=info msg="TearDown network for sandbox \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\" successfully" Oct 9 01:07:47.805860 containerd[1447]: time="2024-10-09T01:07:47.805821332Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:07:47.806470 containerd[1447]: time="2024-10-09T01:07:47.805888894Z" level=info msg="RemovePodSandbox \"ee5141d18475c50180bdb91697eefb93c503094d2749bc103d4c3d0f1d367e12\" returns successfully" Oct 9 01:07:47.806470 containerd[1447]: time="2024-10-09T01:07:47.806436465Z" level=info msg="StopPodSandbox for \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\"" Oct 9 01:07:47.872927 containerd[1447]: 2024-10-09 01:07:47.842 [WARNING][5338] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t6frj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72a149ef-a469-42aa-b8b7-4b018e2ec3a1", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543", Pod:"csi-node-driver-t6frj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali09e912a581a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:47.872927 containerd[1447]: 2024-10-09 01:07:47.842 [INFO][5338] k8s.go 608: Cleaning up netns ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Oct 9 01:07:47.872927 containerd[1447]: 2024-10-09 01:07:47.842 [INFO][5338] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" iface="eth0" netns="" Oct 9 01:07:47.872927 containerd[1447]: 2024-10-09 01:07:47.842 [INFO][5338] k8s.go 615: Releasing IP address(es) ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Oct 9 01:07:47.872927 containerd[1447]: 2024-10-09 01:07:47.842 [INFO][5338] utils.go 188: Calico CNI releasing IP address ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Oct 9 01:07:47.872927 containerd[1447]: 2024-10-09 01:07:47.861 [INFO][5346] ipam_plugin.go 417: Releasing address using handleID ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" HandleID="k8s-pod-network.48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Workload="localhost-k8s-csi--node--driver--t6frj-eth0" Oct 9 01:07:47.872927 containerd[1447]: 2024-10-09 01:07:47.861 [INFO][5346] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:47.872927 containerd[1447]: 2024-10-09 01:07:47.861 [INFO][5346] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:47.872927 containerd[1447]: 2024-10-09 01:07:47.868 [WARNING][5346] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" HandleID="k8s-pod-network.48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Workload="localhost-k8s-csi--node--driver--t6frj-eth0" Oct 9 01:07:47.872927 containerd[1447]: 2024-10-09 01:07:47.868 [INFO][5346] ipam_plugin.go 445: Releasing address using workloadID ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" HandleID="k8s-pod-network.48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Workload="localhost-k8s-csi--node--driver--t6frj-eth0" Oct 9 01:07:47.872927 containerd[1447]: 2024-10-09 01:07:47.870 [INFO][5346] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:47.872927 containerd[1447]: 2024-10-09 01:07:47.871 [INFO][5338] k8s.go 621: Teardown processing complete. ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Oct 9 01:07:47.873944 containerd[1447]: time="2024-10-09T01:07:47.872940649Z" level=info msg="TearDown network for sandbox \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\" successfully" Oct 9 01:07:47.873944 containerd[1447]: time="2024-10-09T01:07:47.872966170Z" level=info msg="StopPodSandbox for \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\" returns successfully" Oct 9 01:07:47.873944 containerd[1447]: time="2024-10-09T01:07:47.873428300Z" level=info msg="RemovePodSandbox for \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\"" Oct 9 01:07:47.873944 containerd[1447]: time="2024-10-09T01:07:47.873459341Z" level=info msg="Forcibly stopping sandbox \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\"" Oct 9 01:07:47.944699 containerd[1447]: 2024-10-09 01:07:47.907 [WARNING][5369] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t6frj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"72a149ef-a469-42aa-b8b7-4b018e2ec3a1", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc061736cc7540ccf585eb16c9da50f78bbf512f3a0840301f64e72123245543", Pod:"csi-node-driver-t6frj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali09e912a581a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:47.944699 containerd[1447]: 2024-10-09 01:07:47.907 [INFO][5369] k8s.go 608: Cleaning up netns ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Oct 9 01:07:47.944699 containerd[1447]: 2024-10-09 01:07:47.907 [INFO][5369] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" iface="eth0" netns="" Oct 9 01:07:47.944699 containerd[1447]: 2024-10-09 01:07:47.907 [INFO][5369] k8s.go 615: Releasing IP address(es) ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Oct 9 01:07:47.944699 containerd[1447]: 2024-10-09 01:07:47.907 [INFO][5369] utils.go 188: Calico CNI releasing IP address ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Oct 9 01:07:47.944699 containerd[1447]: 2024-10-09 01:07:47.931 [INFO][5377] ipam_plugin.go 417: Releasing address using handleID ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" HandleID="k8s-pod-network.48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Workload="localhost-k8s-csi--node--driver--t6frj-eth0" Oct 9 01:07:47.944699 containerd[1447]: 2024-10-09 01:07:47.932 [INFO][5377] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:47.944699 containerd[1447]: 2024-10-09 01:07:47.932 [INFO][5377] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:47.944699 containerd[1447]: 2024-10-09 01:07:47.940 [WARNING][5377] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" HandleID="k8s-pod-network.48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Workload="localhost-k8s-csi--node--driver--t6frj-eth0" Oct 9 01:07:47.944699 containerd[1447]: 2024-10-09 01:07:47.940 [INFO][5377] ipam_plugin.go 445: Releasing address using workloadID ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" HandleID="k8s-pod-network.48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f" Workload="localhost-k8s-csi--node--driver--t6frj-eth0" Oct 9 01:07:47.944699 containerd[1447]: 2024-10-09 01:07:47.941 [INFO][5377] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:07:47.944699 containerd[1447]: 2024-10-09 01:07:47.943 [INFO][5369] k8s.go 621: Teardown processing complete. ContainerID="48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f"
Oct 9 01:07:47.945071 containerd[1447]: time="2024-10-09T01:07:47.944735747Z" level=info msg="TearDown network for sandbox \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\" successfully"
Oct 9 01:07:47.949875 containerd[1447]: time="2024-10-09T01:07:47.949838656Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 9 01:07:47.949921 containerd[1447]: time="2024-10-09T01:07:47.949898897Z" level=info msg="RemovePodSandbox \"48a98a0e5d5d4ed8cff4d8e0a91cc028985669156668c00b9d4a7027db3ede3f\" returns successfully"
Oct 9 01:07:47.950354 containerd[1447]: time="2024-10-09T01:07:47.950325866Z" level=info msg="StopPodSandbox for \"cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406\""
Oct 9 01:07:47.950425 containerd[1447]: time="2024-10-09T01:07:47.950408868Z" level=info msg="TearDown network for sandbox \"cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406\" successfully"
Oct 9 01:07:47.950460 containerd[1447]: time="2024-10-09T01:07:47.950424749Z" level=info msg="StopPodSandbox for \"cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406\" returns successfully"
Oct 9 01:07:47.950774 containerd[1447]: time="2024-10-09T01:07:47.950730515Z" level=info msg="RemovePodSandbox for \"cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406\""
Oct 9 01:07:47.950815 containerd[1447]: time="2024-10-09T01:07:47.950781916Z" level=info msg="Forcibly stopping sandbox \"cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406\""
Oct 9 01:07:47.950846 containerd[1447]: time="2024-10-09T01:07:47.950835637Z" level=info msg="TearDown network for sandbox \"cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406\" successfully"
Oct 9 01:07:47.953103 containerd[1447]: time="2024-10-09T01:07:47.953062805Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 9 01:07:47.953142 containerd[1447]: time="2024-10-09T01:07:47.953116686Z" level=info msg="RemovePodSandbox \"cd3888bdd6ac34ac4860debe110547d8179c224e0707ae4c9634db5215121406\" returns successfully"
Oct 9 01:07:47.953539 containerd[1447]: time="2024-10-09T01:07:47.953513735Z" level=info msg="StopPodSandbox for \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\""
Oct 9 01:07:48.030972 containerd[1447]: 2024-10-09 01:07:47.998 [WARNING][5400] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0", GenerateName:"calico-kube-controllers-8c6f8c56b-", Namespace:"calico-system", SelfLink:"", UID:"6e9dceb0-a563-4980-b699-4a2e937ee8e7", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8c6f8c56b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e", Pod:"calico-kube-controllers-8c6f8c56b-smw77", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali75aec8976aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 01:07:48.030972 containerd[1447]: 2024-10-09 01:07:47.999 [INFO][5400] k8s.go 608: Cleaning up netns ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792"
Oct 9 01:07:48.030972 containerd[1447]: 2024-10-09 01:07:47.999 [INFO][5400] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" iface="eth0" netns=""
Oct 9 01:07:48.030972 containerd[1447]: 2024-10-09 01:07:47.999 [INFO][5400] k8s.go 615: Releasing IP address(es) ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792"
Oct 9 01:07:48.030972 containerd[1447]: 2024-10-09 01:07:47.999 [INFO][5400] utils.go 188: Calico CNI releasing IP address ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792"
Oct 9 01:07:48.030972 containerd[1447]: 2024-10-09 01:07:48.018 [INFO][5408] ipam_plugin.go 417: Releasing address using handleID ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" HandleID="k8s-pod-network.a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" Workload="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0"
Oct 9 01:07:48.030972 containerd[1447]: 2024-10-09 01:07:48.018 [INFO][5408] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 01:07:48.030972 containerd[1447]: 2024-10-09 01:07:48.018 [INFO][5408] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 01:07:48.030972 containerd[1447]: 2024-10-09 01:07:48.026 [WARNING][5408] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" HandleID="k8s-pod-network.a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" Workload="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0"
Oct 9 01:07:48.030972 containerd[1447]: 2024-10-09 01:07:48.026 [INFO][5408] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" HandleID="k8s-pod-network.a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" Workload="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0"
Oct 9 01:07:48.030972 containerd[1447]: 2024-10-09 01:07:48.028 [INFO][5408] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 01:07:48.030972 containerd[1447]: 2024-10-09 01:07:48.029 [INFO][5400] k8s.go 621: Teardown processing complete. ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792"
Oct 9 01:07:48.030972 containerd[1447]: time="2024-10-09T01:07:48.030942346Z" level=info msg="TearDown network for sandbox \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\" successfully"
Oct 9 01:07:48.030972 containerd[1447]: time="2024-10-09T01:07:48.030969387Z" level=info msg="StopPodSandbox for \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\" returns successfully"
Oct 9 01:07:48.032736 containerd[1447]: time="2024-10-09T01:07:48.032676103Z" level=info msg="RemovePodSandbox for \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\""
Oct 9 01:07:48.032736 containerd[1447]: time="2024-10-09T01:07:48.032726864Z" level=info msg="Forcibly stopping sandbox \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\""
Oct 9 01:07:48.102796 containerd[1447]: 2024-10-09 01:07:48.072 [WARNING][5430] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0", GenerateName:"calico-kube-controllers-8c6f8c56b-", Namespace:"calico-system", SelfLink:"", UID:"6e9dceb0-a563-4980-b699-4a2e937ee8e7", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8c6f8c56b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fc0cd6dd899ac1ab4d30a04399a588497596e057cdff2ec821bf1c8c846eb71e", Pod:"calico-kube-controllers-8c6f8c56b-smw77", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali75aec8976aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 01:07:48.102796 containerd[1447]: 2024-10-09 01:07:48.073 [INFO][5430] k8s.go 608: Cleaning up netns ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792"
Oct 9 01:07:48.102796 containerd[1447]: 2024-10-09 01:07:48.073 [INFO][5430] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" iface="eth0" netns=""
Oct 9 01:07:48.102796 containerd[1447]: 2024-10-09 01:07:48.073 [INFO][5430] k8s.go 615: Releasing IP address(es) ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792"
Oct 9 01:07:48.102796 containerd[1447]: 2024-10-09 01:07:48.073 [INFO][5430] utils.go 188: Calico CNI releasing IP address ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792"
Oct 9 01:07:48.102796 containerd[1447]: 2024-10-09 01:07:48.089 [INFO][5439] ipam_plugin.go 417: Releasing address using handleID ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" HandleID="k8s-pod-network.a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" Workload="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0"
Oct 9 01:07:48.102796 containerd[1447]: 2024-10-09 01:07:48.089 [INFO][5439] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 01:07:48.102796 containerd[1447]: 2024-10-09 01:07:48.089 [INFO][5439] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 01:07:48.102796 containerd[1447]: 2024-10-09 01:07:48.097 [WARNING][5439] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" HandleID="k8s-pod-network.a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" Workload="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0"
Oct 9 01:07:48.102796 containerd[1447]: 2024-10-09 01:07:48.097 [INFO][5439] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" HandleID="k8s-pod-network.a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792" Workload="localhost-k8s-calico--kube--controllers--8c6f8c56b--smw77-eth0"
Oct 9 01:07:48.102796 containerd[1447]: 2024-10-09 01:07:48.099 [INFO][5439] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 01:07:48.102796 containerd[1447]: 2024-10-09 01:07:48.100 [INFO][5430] k8s.go 621: Teardown processing complete. ContainerID="a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792"
Oct 9 01:07:48.102796 containerd[1447]: time="2024-10-09T01:07:48.102283819Z" level=info msg="TearDown network for sandbox \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\" successfully"
Oct 9 01:07:48.104794 containerd[1447]: time="2024-10-09T01:07:48.104719070Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 9 01:07:48.104843 containerd[1447]: time="2024-10-09T01:07:48.104816832Z" level=info msg="RemovePodSandbox \"a91cb672632147ab6e0122a7d7e8650b32ee7186ef2d71b3f82d9c5166f05792\" returns successfully"
Oct 9 01:07:48.523397 systemd[1]: Started sshd@18-10.0.0.142:22-10.0.0.1:44220.service - OpenSSH per-connection server daemon (10.0.0.1:44220).
Oct 9 01:07:48.571896 sshd[5448]: Accepted publickey for core from 10.0.0.1 port 44220 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 01:07:48.573764 sshd[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:07:48.584205 systemd-logind[1432]: New session 19 of user core.
Oct 9 01:07:48.588913 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 9 01:07:48.744047 sshd[5448]: pam_unix(sshd:session): session closed for user core
Oct 9 01:07:48.747249 systemd[1]: session-19.scope: Deactivated successfully.
Oct 9 01:07:48.748563 systemd[1]: sshd@18-10.0.0.142:22-10.0.0.1:44220.service: Deactivated successfully.
Oct 9 01:07:48.750489 systemd-logind[1432]: Session 19 logged out. Waiting for processes to exit.
Oct 9 01:07:48.751604 systemd-logind[1432]: Removed session 19.
Oct 9 01:07:53.756530 systemd[1]: Started sshd@19-10.0.0.142:22-10.0.0.1:50034.service - OpenSSH per-connection server daemon (10.0.0.1:50034).
Oct 9 01:07:53.795293 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 50034 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 01:07:53.796499 sshd[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:07:53.800923 systemd-logind[1432]: New session 20 of user core.
Oct 9 01:07:53.812918 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 9 01:07:53.939879 sshd[5494]: pam_unix(sshd:session): session closed for user core
Oct 9 01:07:53.942968 systemd[1]: sshd@19-10.0.0.142:22-10.0.0.1:50034.service: Deactivated successfully.
Oct 9 01:07:53.944568 systemd[1]: session-20.scope: Deactivated successfully.
Oct 9 01:07:53.945189 systemd-logind[1432]: Session 20 logged out. Waiting for processes to exit.
Oct 9 01:07:53.945881 systemd-logind[1432]: Removed session 20.
Oct 9 01:07:58.950358 systemd[1]: Started sshd@20-10.0.0.142:22-10.0.0.1:50044.service - OpenSSH per-connection server daemon (10.0.0.1:50044).
Oct 9 01:07:58.988551 sshd[5509]: Accepted publickey for core from 10.0.0.1 port 50044 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 01:07:58.990110 sshd[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:07:58.993748 systemd-logind[1432]: New session 21 of user core.
Oct 9 01:07:58.999921 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 9 01:07:59.120952 sshd[5509]: pam_unix(sshd:session): session closed for user core
Oct 9 01:07:59.127434 systemd[1]: sshd@20-10.0.0.142:22-10.0.0.1:50044.service: Deactivated successfully.
Oct 9 01:07:59.129051 systemd[1]: session-21.scope: Deactivated successfully.
Oct 9 01:07:59.129598 systemd-logind[1432]: Session 21 logged out. Waiting for processes to exit.
Oct 9 01:07:59.130426 systemd-logind[1432]: Removed session 21.
Oct 9 01:07:59.367067 kubelet[2623]: E1009 01:07:59.367020 2623 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"