Oct 8 19:55:07.888423 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 8 19:55:07.888444 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Oct 8 18:22:02 -00 2024
Oct 8 19:55:07.888454 kernel: KASLR enabled
Oct 8 19:55:07.888459 kernel: efi: EFI v2.7 by EDK II
Oct 8 19:55:07.888465 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Oct 8 19:55:07.888471 kernel: random: crng init done
Oct 8 19:55:07.888478 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:55:07.888483 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Oct 8 19:55:07.888490 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 8 19:55:07.888497 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:55:07.888503 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:55:07.888509 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:55:07.888515 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:55:07.888521 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:55:07.888528 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:55:07.888536 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:55:07.888543 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:55:07.888549 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:55:07.888555 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 8 19:55:07.888562 kernel: NUMA: Failed to initialise from firmware
Oct 8 19:55:07.888568 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 19:55:07.888574 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Oct 8 19:55:07.888581 kernel: Zone ranges:
Oct 8 19:55:07.888587 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 19:55:07.888593 kernel: DMA32 empty
Oct 8 19:55:07.888609 kernel: Normal empty
Oct 8 19:55:07.888616 kernel: Movable zone start for each node
Oct 8 19:55:07.888622 kernel: Early memory node ranges
Oct 8 19:55:07.888628 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Oct 8 19:55:07.888635 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Oct 8 19:55:07.888641 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Oct 8 19:55:07.888648 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 8 19:55:07.888654 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 8 19:55:07.888660 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 8 19:55:07.888667 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 8 19:55:07.888673 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 19:55:07.888683 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 8 19:55:07.888690 kernel: psci: probing for conduit method from ACPI.
Oct 8 19:55:07.888697 kernel: psci: PSCIv1.1 detected in firmware.
Oct 8 19:55:07.888703 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 8 19:55:07.888712 kernel: psci: Trusted OS migration not required
Oct 8 19:55:07.888719 kernel: psci: SMC Calling Convention v1.1
Oct 8 19:55:07.888726 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 8 19:55:07.888734 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 8 19:55:07.888740 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 8 19:55:07.888747 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 8 19:55:07.888754 kernel: Detected PIPT I-cache on CPU0
Oct 8 19:55:07.888761 kernel: CPU features: detected: GIC system register CPU interface
Oct 8 19:55:07.888768 kernel: CPU features: detected: Hardware dirty bit management
Oct 8 19:55:07.888775 kernel: CPU features: detected: Spectre-v4
Oct 8 19:55:07.888781 kernel: CPU features: detected: Spectre-BHB
Oct 8 19:55:07.888788 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 8 19:55:07.888795 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 8 19:55:07.888803 kernel: CPU features: detected: ARM erratum 1418040
Oct 8 19:55:07.888810 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 8 19:55:07.888816 kernel: alternatives: applying boot alternatives
Oct 8 19:55:07.888824 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2
Oct 8 19:55:07.888831 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:55:07.888838 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 19:55:07.888844 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:55:07.888851 kernel: Fallback order for Node 0: 0
Oct 8 19:55:07.888858 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 8 19:55:07.888865 kernel: Policy zone: DMA
Oct 8 19:55:07.888871 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:55:07.888879 kernel: software IO TLB: area num 4.
Oct 8 19:55:07.888886 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Oct 8 19:55:07.888893 kernel: Memory: 2386788K/2572288K available (10240K kernel code, 2184K rwdata, 8080K rodata, 39104K init, 897K bss, 185500K reserved, 0K cma-reserved)
Oct 8 19:55:07.888900 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 8 19:55:07.888908 kernel: trace event string verifier disabled
Oct 8 19:55:07.888914 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:55:07.888921 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:55:07.888928 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 8 19:55:07.888935 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:55:07.888942 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:55:07.888949 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:55:07.888956 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 8 19:55:07.888964 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 8 19:55:07.888970 kernel: GICv3: 256 SPIs implemented
Oct 8 19:55:07.888977 kernel: GICv3: 0 Extended SPIs implemented
Oct 8 19:55:07.888984 kernel: Root IRQ handler: gic_handle_irq
Oct 8 19:55:07.888990 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 8 19:55:07.888997 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 8 19:55:07.889004 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 8 19:55:07.889011 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Oct 8 19:55:07.889018 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Oct 8 19:55:07.889024 kernel: GICv3: using LPI property table @0x00000000400f0000
Oct 8 19:55:07.889031 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Oct 8 19:55:07.889040 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:55:07.889046 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:55:07.889053 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 8 19:55:07.889060 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 8 19:55:07.889067 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 8 19:55:07.889074 kernel: arm-pv: using stolen time PV
Oct 8 19:55:07.889081 kernel: Console: colour dummy device 80x25
Oct 8 19:55:07.889087 kernel: ACPI: Core revision 20230628
Oct 8 19:55:07.889095 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 8 19:55:07.889102 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:55:07.889110 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Oct 8 19:55:07.889116 kernel: SELinux: Initializing.
Oct 8 19:55:07.889123 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:55:07.889130 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:55:07.889137 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:55:07.889144 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:55:07.889151 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:55:07.889158 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:55:07.889165 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 8 19:55:07.889173 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 8 19:55:07.889180 kernel: Remapping and enabling EFI services.
Oct 8 19:55:07.889187 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:55:07.889194 kernel: Detected PIPT I-cache on CPU1
Oct 8 19:55:07.889201 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 8 19:55:07.889207 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Oct 8 19:55:07.889214 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:55:07.889221 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 8 19:55:07.889228 kernel: Detected PIPT I-cache on CPU2
Oct 8 19:55:07.889235 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 8 19:55:07.889243 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Oct 8 19:55:07.889250 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:55:07.889261 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 8 19:55:07.889270 kernel: Detected PIPT I-cache on CPU3
Oct 8 19:55:07.889277 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 8 19:55:07.889284 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Oct 8 19:55:07.889291 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:55:07.889298 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 8 19:55:07.889306 kernel: smp: Brought up 1 node, 4 CPUs
Oct 8 19:55:07.889444 kernel: SMP: Total of 4 processors activated.
Oct 8 19:55:07.889455 kernel: CPU features: detected: 32-bit EL0 Support
Oct 8 19:55:07.889462 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 8 19:55:07.889470 kernel: CPU features: detected: Common not Private translations
Oct 8 19:55:07.889477 kernel: CPU features: detected: CRC32 instructions
Oct 8 19:55:07.889484 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 8 19:55:07.889492 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 8 19:55:07.889499 kernel: CPU features: detected: LSE atomic instructions
Oct 8 19:55:07.889510 kernel: CPU features: detected: Privileged Access Never
Oct 8 19:55:07.889517 kernel: CPU features: detected: RAS Extension Support
Oct 8 19:55:07.889525 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 8 19:55:07.889532 kernel: CPU: All CPU(s) started at EL1
Oct 8 19:55:07.889540 kernel: alternatives: applying system-wide alternatives
Oct 8 19:55:07.889547 kernel: devtmpfs: initialized
Oct 8 19:55:07.889554 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:55:07.889562 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 8 19:55:07.889569 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:55:07.889578 kernel: SMBIOS 3.0.0 present.
Oct 8 19:55:07.889585 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Oct 8 19:55:07.889592 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:55:07.889607 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 8 19:55:07.889615 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 8 19:55:07.889622 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 8 19:55:07.889630 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:55:07.889637 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Oct 8 19:55:07.889644 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:55:07.889653 kernel: cpuidle: using governor menu
Oct 8 19:55:07.889661 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 8 19:55:07.889668 kernel: ASID allocator initialised with 32768 entries
Oct 8 19:55:07.889675 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:55:07.889682 kernel: Serial: AMBA PL011 UART driver
Oct 8 19:55:07.889690 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 8 19:55:07.889697 kernel: Modules: 0 pages in range for non-PLT usage
Oct 8 19:55:07.889705 kernel: Modules: 509104 pages in range for PLT usage
Oct 8 19:55:07.889712 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:55:07.889720 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:55:07.889728 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 8 19:55:07.889735 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 8 19:55:07.889742 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:55:07.889750 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:55:07.889757 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 8 19:55:07.889765 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 8 19:55:07.889772 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:55:07.889779 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:55:07.889787 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:55:07.889794 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:55:07.889802 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 19:55:07.889809 kernel: ACPI: Interpreter enabled
Oct 8 19:55:07.889816 kernel: ACPI: Using GIC for interrupt routing
Oct 8 19:55:07.889824 kernel: ACPI: MCFG table detected, 1 entries
Oct 8 19:55:07.889831 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 8 19:55:07.889838 kernel: printk: console [ttyAMA0] enabled
Oct 8 19:55:07.889845 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 19:55:07.889982 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:55:07.890055 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 8 19:55:07.890120 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 8 19:55:07.890182 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 8 19:55:07.890244 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 8 19:55:07.890254 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 8 19:55:07.890261 kernel: PCI host bridge to bus 0000:00
Oct 8 19:55:07.890348 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 8 19:55:07.890410 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 8 19:55:07.890467 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 8 19:55:07.890523 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 19:55:07.890616 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 8 19:55:07.890696 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 8 19:55:07.890767 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 8 19:55:07.890848 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 8 19:55:07.890915 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 19:55:07.890980 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 19:55:07.891046 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 8 19:55:07.891110 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 8 19:55:07.891166 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 8 19:55:07.891224 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 8 19:55:07.891280 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 8 19:55:07.891289 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 8 19:55:07.891297 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 8 19:55:07.891304 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 8 19:55:07.891331 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 8 19:55:07.891339 kernel: iommu: Default domain type: Translated
Oct 8 19:55:07.891347 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 8 19:55:07.891354 kernel: efivars: Registered efivars operations
Oct 8 19:55:07.891364 kernel: vgaarb: loaded
Oct 8 19:55:07.891371 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 8 19:55:07.891378 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:55:07.891386 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:55:07.891393 kernel: pnp: PnP ACPI init
Oct 8 19:55:07.891464 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 8 19:55:07.891475 kernel: pnp: PnP ACPI: found 1 devices
Oct 8 19:55:07.891482 kernel: NET: Registered PF_INET protocol family
Oct 8 19:55:07.891492 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 19:55:07.891499 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 19:55:07.891507 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:55:07.891514 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:55:07.891522 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 19:55:07.891529 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 19:55:07.891536 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:55:07.891544 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:55:07.891551 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 19:55:07.891560 kernel: PCI: CLS 0 bytes, default 64
Oct 8 19:55:07.891567 kernel: kvm [1]: HYP mode not available
Oct 8 19:55:07.891574 kernel: Initialise system trusted keyrings
Oct 8 19:55:07.891582 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 8 19:55:07.891589 kernel: Key type asymmetric registered
Oct 8 19:55:07.891604 kernel: Asymmetric key parser 'x509' registered
Oct 8 19:55:07.891612 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 8 19:55:07.891620 kernel: io scheduler mq-deadline registered
Oct 8 19:55:07.891627 kernel: io scheduler kyber registered
Oct 8 19:55:07.891639 kernel: io scheduler bfq registered
Oct 8 19:55:07.891647 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 8 19:55:07.891654 kernel: ACPI: button: Power Button [PWRB]
Oct 8 19:55:07.891662 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 8 19:55:07.891730 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 8 19:55:07.891740 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 19:55:07.891747 kernel: thunder_xcv, ver 1.0
Oct 8 19:55:07.891755 kernel: thunder_bgx, ver 1.0
Oct 8 19:55:07.891762 kernel: nicpf, ver 1.0
Oct 8 19:55:07.891772 kernel: nicvf, ver 1.0
Oct 8 19:55:07.891841 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 8 19:55:07.891903 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T19:55:07 UTC (1728417307)
Oct 8 19:55:07.891913 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 8 19:55:07.891920 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 8 19:55:07.891928 kernel: watchdog: Delayed init of the lockup detector failed: -19
Oct 8 19:55:07.891935 kernel: watchdog: Hard watchdog permanently disabled
Oct 8 19:55:07.891943 kernel: NET: Registered PF_INET6 protocol family
Oct 8 19:55:07.891952 kernel: Segment Routing with IPv6
Oct 8 19:55:07.891959 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 19:55:07.891967 kernel: NET: Registered PF_PACKET protocol family
Oct 8 19:55:07.891974 kernel: Key type dns_resolver registered
Oct 8 19:55:07.891981 kernel: registered taskstats version 1
Oct 8 19:55:07.891988 kernel: Loading compiled-in X.509 certificates
Oct 8 19:55:07.891996 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e5b54c43c129014ce5ace0e8cd7b641a0fcb136e'
Oct 8 19:55:07.892003 kernel: Key type .fscrypt registered
Oct 8 19:55:07.892010 kernel: Key type fscrypt-provisioning registered
Oct 8 19:55:07.892019 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 19:55:07.892026 kernel: ima: Allocated hash algorithm: sha1
Oct 8 19:55:07.892033 kernel: ima: No architecture policies found
Oct 8 19:55:07.892041 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 8 19:55:07.892048 kernel: clk: Disabling unused clocks
Oct 8 19:55:07.892055 kernel: Freeing unused kernel memory: 39104K
Oct 8 19:55:07.892062 kernel: Run /init as init process
Oct 8 19:55:07.892069 kernel: with arguments:
Oct 8 19:55:07.892077 kernel: /init
Oct 8 19:55:07.892085 kernel: with environment:
Oct 8 19:55:07.892092 kernel: HOME=/
Oct 8 19:55:07.892099 kernel: TERM=linux
Oct 8 19:55:07.892121 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 19:55:07.892130 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:55:07.892139 systemd[1]: Detected virtualization kvm.
Oct 8 19:55:07.892147 systemd[1]: Detected architecture arm64.
Oct 8 19:55:07.892154 systemd[1]: Running in initrd.
Oct 8 19:55:07.892163 systemd[1]: No hostname configured, using default hostname.
Oct 8 19:55:07.892171 systemd[1]: Hostname set to .
Oct 8 19:55:07.892179 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:55:07.892186 systemd[1]: Queued start job for default target initrd.target.
Oct 8 19:55:07.892194 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:55:07.892202 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:55:07.892211 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 19:55:07.892218 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:55:07.892228 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:55:07.892236 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:55:07.892245 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:55:07.892253 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:55:07.892261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:55:07.892269 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:55:07.892278 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:55:07.892286 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:55:07.892294 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:55:07.892302 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:55:07.892309 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:55:07.892328 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:55:07.892336 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:55:07.892344 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:55:07.892352 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:55:07.892362 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:55:07.892370 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:55:07.892377 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:55:07.892385 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:55:07.892393 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:55:07.892401 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:55:07.892409 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:55:07.892417 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:55:07.892424 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:55:07.892434 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:55:07.892442 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:55:07.892450 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:55:07.892457 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:55:07.892466 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:55:07.892475 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:55:07.892499 systemd-journald[237]: Collecting audit messages is disabled.
Oct 8 19:55:07.892519 systemd-journald[237]: Journal started
Oct 8 19:55:07.892539 systemd-journald[237]: Runtime Journal (/run/log/journal/37528c7a40e549d3ba8678d6501d5769) is 5.9M, max 47.3M, 41.4M free.
Oct 8 19:55:07.900410 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:55:07.900434 kernel: Bridge firewalling registered
Oct 8 19:55:07.885322 systemd-modules-load[238]: Inserted module 'overlay'
Oct 8 19:55:07.899340 systemd-modules-load[238]: Inserted module 'br_netfilter'
Oct 8 19:55:07.904760 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:55:07.904779 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:55:07.905965 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:55:07.907165 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:55:07.912368 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:55:07.919432 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:55:07.920828 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:55:07.922957 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:55:07.925971 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 8 19:55:07.930894 dracut-cmdline[265]: dracut-dracut-053
Oct 8 19:55:07.931759 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:55:07.934285 dracut-cmdline[265]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2
Oct 8 19:55:07.934954 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:55:07.939611 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:55:07.947564 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:55:07.973477 systemd-resolved[300]: Positive Trust Anchors:
Oct 8 19:55:07.973493 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:55:07.973523 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 8 19:55:07.978149 systemd-resolved[300]: Defaulting to hostname 'linux'.
Oct 8 19:55:07.979309 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:55:07.982772 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:55:08.007327 kernel: SCSI subsystem initialized
Oct 8 19:55:08.011334 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:55:08.019343 kernel: iscsi: registered transport (tcp)
Oct 8 19:55:08.033327 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:55:08.033349 kernel: QLogic iSCSI HBA Driver
Oct 8 19:55:08.079427 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:55:08.086499 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:55:08.116738 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:55:08.116798 kernel: device-mapper: uevent: version 1.0.3
Oct 8 19:55:08.116809 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 19:55:08.166356 kernel: raid6: neonx8 gen() 15670 MB/s
Oct 8 19:55:08.183341 kernel: raid6: neonx4 gen() 15651 MB/s
Oct 8 19:55:08.200335 kernel: raid6: neonx2 gen() 13166 MB/s
Oct 8 19:55:08.217341 kernel: raid6: neonx1 gen() 10458 MB/s
Oct 8 19:55:08.234346 kernel: raid6: int64x8 gen() 6933 MB/s
Oct 8 19:55:08.251346 kernel: raid6: int64x4 gen() 7311 MB/s
Oct 8 19:55:08.268341 kernel: raid6: int64x2 gen() 6114 MB/s
Oct 8 19:55:08.285489 kernel: raid6: int64x1 gen() 5022 MB/s
Oct 8 19:55:08.285504 kernel: raid6: using algorithm neonx8 gen() 15670 MB/s
Oct 8 19:55:08.303331 kernel: raid6: .... xor() 11908 MB/s, rmw enabled
Oct 8 19:55:08.303350 kernel: raid6: using neon recovery algorithm
Oct 8 19:55:08.308333 kernel: xor: measuring software checksum speed
Oct 8 19:55:08.309444 kernel: 8regs : 18090 MB/sec
Oct 8 19:55:08.309456 kernel: 32regs : 19660 MB/sec
Oct 8 19:55:08.310703 kernel: arm64_neon : 26848 MB/sec
Oct 8 19:55:08.310718 kernel: xor: using function: arm64_neon (26848 MB/sec)
Oct 8 19:55:08.361346 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 19:55:08.372003 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:55:08.378458 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:55:08.389710 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Oct 8 19:55:08.392826 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:55:08.399461 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 19:55:08.410659 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Oct 8 19:55:08.436243 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:55:08.447478 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 19:55:08.487392 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:55:08.498475 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 8 19:55:08.511614 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 19:55:08.515002 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 19:55:08.516120 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:55:08.518131 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 19:55:08.527500 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 19:55:08.535324 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 8 19:55:08.544646 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 8 19:55:08.539488 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 19:55:08.551467 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 8 19:55:08.551503 kernel: GPT:9289727 != 19775487 Oct 8 19:55:08.551514 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 8 19:55:08.551524 kernel: GPT:9289727 != 19775487 Oct 8 19:55:08.551532 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 8 19:55:08.551548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 19:55:08.553079 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 19:55:08.553195 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:55:08.556303 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:55:08.557357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 8 19:55:08.557534 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:55:08.559642 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:55:08.571265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:55:08.574124 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (514) Oct 8 19:55:08.577333 kernel: BTRFS: device fsid a2a78d47-736b-4018-a518-3cfb16920575 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (512) Oct 8 19:55:08.585356 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:55:08.600229 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 8 19:55:08.607781 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 8 19:55:08.612027 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 8 19:55:08.615730 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 8 19:55:08.616947 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 8 19:55:08.635474 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 19:55:08.638574 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:55:08.641675 disk-uuid[553]: Primary Header is updated. Oct 8 19:55:08.641675 disk-uuid[553]: Secondary Entries is updated. Oct 8 19:55:08.641675 disk-uuid[553]: Secondary Header is updated. Oct 8 19:55:08.644563 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 19:55:08.662028 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 8 19:55:09.665351 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 19:55:09.665913 disk-uuid[554]: The operation has completed successfully. Oct 8 19:55:09.686357 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 19:55:09.686461 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 8 19:55:09.706465 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 8 19:55:09.710346 sh[576]: Success Oct 8 19:55:09.726348 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 8 19:55:09.755836 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 8 19:55:09.763692 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 19:55:09.765250 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 8 19:55:09.774968 kernel: BTRFS info (device dm-0): first mount of filesystem a2a78d47-736b-4018-a518-3cfb16920575 Oct 8 19:55:09.775001 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:55:09.775012 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 19:55:09.776801 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 19:55:09.777470 kernel: BTRFS info (device dm-0): using free space tree Oct 8 19:55:09.780562 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 19:55:09.781793 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 8 19:55:09.797434 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 8 19:55:09.798880 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Oct 8 19:55:09.805771 kernel: BTRFS info (device vda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:55:09.805811 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:55:09.805827 kernel: BTRFS info (device vda6): using free space tree Oct 8 19:55:09.808441 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 19:55:09.815451 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 8 19:55:09.817417 kernel: BTRFS info (device vda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:55:09.832865 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 8 19:55:09.841554 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 19:55:09.898942 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 19:55:09.914452 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 19:55:09.940012 systemd-networkd[768]: lo: Link UP Oct 8 19:55:09.940023 systemd-networkd[768]: lo: Gained carrier Oct 8 19:55:09.940785 systemd-networkd[768]: Enumeration completed Oct 8 19:55:09.941381 ignition[681]: Ignition 2.18.0 Oct 8 19:55:09.941302 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 19:55:09.941388 ignition[681]: Stage: fetch-offline Oct 8 19:55:09.943121 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:55:09.941422 ignition[681]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:55:09.943125 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 8 19:55:09.941430 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:55:09.943916 systemd-networkd[768]: eth0: Link UP Oct 8 19:55:09.941519 ignition[681]: parsed url from cmdline: "" Oct 8 19:55:09.943920 systemd-networkd[768]: eth0: Gained carrier Oct 8 19:55:09.941522 ignition[681]: no config URL provided Oct 8 19:55:09.943933 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:55:09.941529 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 19:55:09.944505 systemd[1]: Reached target network.target - Network. Oct 8 19:55:09.941537 ignition[681]: no config at "/usr/lib/ignition/user.ign" Oct 8 19:55:09.941560 ignition[681]: op(1): [started] loading QEMU firmware config module Oct 8 19:55:09.962377 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 19:55:09.941564 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 8 19:55:09.952523 ignition[681]: op(1): [finished] loading QEMU firmware config module Oct 8 19:55:09.998353 ignition[681]: parsing config with SHA512: 233f773cc5aafceaec387aae103841a1476b4aa1a445655dae456c8f0751cbea27edca9bc256485488c8a294e7642e8b7b0f281681b818cdc4be8a2942790af5 Oct 8 19:55:10.002624 unknown[681]: fetched base config from "system" Oct 8 19:55:10.002636 unknown[681]: fetched user config from "qemu" Oct 8 19:55:10.003053 ignition[681]: fetch-offline: fetch-offline passed Oct 8 19:55:10.005136 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 19:55:10.003105 ignition[681]: Ignition finished successfully Oct 8 19:55:10.006368 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 8 19:55:10.014482 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 8 19:55:10.026276 ignition[777]: Ignition 2.18.0 Oct 8 19:55:10.026286 ignition[777]: Stage: kargs Oct 8 19:55:10.026473 ignition[777]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:55:10.026482 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:55:10.027358 ignition[777]: kargs: kargs passed Oct 8 19:55:10.027405 ignition[777]: Ignition finished successfully Oct 8 19:55:10.031352 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 8 19:55:10.046568 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 8 19:55:10.057877 ignition[785]: Ignition 2.18.0 Oct 8 19:55:10.057889 ignition[785]: Stage: disks Oct 8 19:55:10.058081 ignition[785]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:55:10.060949 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 8 19:55:10.058091 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:55:10.062187 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 8 19:55:10.059009 ignition[785]: disks: disks passed Oct 8 19:55:10.063795 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 8 19:55:10.059057 ignition[785]: Ignition finished successfully Oct 8 19:55:10.065787 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 19:55:10.067422 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 19:55:10.068765 systemd[1]: Reached target basic.target - Basic System. Oct 8 19:55:10.083573 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 8 19:55:10.094554 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 8 19:55:10.099215 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 8 19:55:10.101979 systemd[1]: Mounting sysroot.mount - /sysroot... 
Oct 8 19:55:10.151341 kernel: EXT4-fs (vda9): mounted filesystem fbf53fb2-c32f-44fa-a235-3100e56d8882 r/w with ordered data mode. Quota mode: none. Oct 8 19:55:10.151351 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 8 19:55:10.152708 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 8 19:55:10.165427 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 19:55:10.167270 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 8 19:55:10.168499 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 8 19:55:10.168602 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 8 19:55:10.168634 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 19:55:10.176073 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 8 19:55:10.180354 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804) Oct 8 19:55:10.178991 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 8 19:55:10.185103 kernel: BTRFS info (device vda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:55:10.185129 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:55:10.185148 kernel: BTRFS info (device vda6): using free space tree Oct 8 19:55:10.188405 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 19:55:10.190160 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 8 19:55:10.240267 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 19:55:10.245340 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory Oct 8 19:55:10.251070 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 19:55:10.255531 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 19:55:10.350959 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 8 19:55:10.365793 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 8 19:55:10.368425 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 8 19:55:10.374351 kernel: BTRFS info (device vda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:55:10.401277 ignition[917]: INFO : Ignition 2.18.0 Oct 8 19:55:10.401277 ignition[917]: INFO : Stage: mount Oct 8 19:55:10.401277 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:55:10.401277 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:55:10.405358 ignition[917]: INFO : mount: mount passed Oct 8 19:55:10.405358 ignition[917]: INFO : Ignition finished successfully Oct 8 19:55:10.404580 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 19:55:10.416507 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 19:55:10.417589 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 8 19:55:10.433710 systemd-resolved[300]: Detected conflict on linux IN A 10.0.0.134 Oct 8 19:55:10.433727 systemd-resolved[300]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Oct 8 19:55:10.774051 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 19:55:10.785556 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 8 19:55:10.791334 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (933) Oct 8 19:55:10.793491 kernel: BTRFS info (device vda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:55:10.793519 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:55:10.793530 kernel: BTRFS info (device vda6): using free space tree Oct 8 19:55:10.796333 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 19:55:10.797265 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 19:55:10.814110 ignition[950]: INFO : Ignition 2.18.0 Oct 8 19:55:10.814110 ignition[950]: INFO : Stage: files Oct 8 19:55:10.815701 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:55:10.815701 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:55:10.815701 ignition[950]: DEBUG : files: compiled without relabeling support, skipping Oct 8 19:55:10.818943 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 19:55:10.818943 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 19:55:10.818943 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 19:55:10.818943 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 19:55:10.818943 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 19:55:10.818229 unknown[950]: wrote ssh authorized keys file for user: core Oct 8 19:55:10.825565 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 8 19:55:10.825565 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Oct 8 19:55:10.869140 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 8 19:55:10.992696 systemd-networkd[768]: eth0: Gained IPv6LL Oct 8 19:55:11.055444 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 8 19:55:11.055444 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 8 19:55:11.059558 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 8 19:55:11.059558 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 19:55:11.059558 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 19:55:11.059558 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 19:55:11.059558 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 19:55:11.059558 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 19:55:11.059558 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 19:55:11.059558 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 19:55:11.059558 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 19:55:11.059558 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 19:55:11.059558 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 8 19:55:11.059558 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 8 19:55:11.059558 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Oct 8 19:55:11.360160 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 8 19:55:11.621278 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 8 19:55:11.621278 ignition[950]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 8 19:55:11.624767 ignition[950]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 19:55:11.624767 ignition[950]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 19:55:11.624767 ignition[950]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 8 19:55:11.624767 ignition[950]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 8 19:55:11.624767 ignition[950]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 8 19:55:11.624767 ignition[950]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 8 19:55:11.624767 ignition[950]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 8 19:55:11.624767 ignition[950]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 8 19:55:11.645478 ignition[950]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 8 19:55:11.649695 ignition[950]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 8 19:55:11.651416 ignition[950]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 8 19:55:11.651416 ignition[950]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 8 19:55:11.651416 ignition[950]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 8 19:55:11.651416 ignition[950]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 8 19:55:11.651416 ignition[950]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 8 19:55:11.651416 ignition[950]: INFO : files: files passed Oct 8 19:55:11.651416 ignition[950]: INFO : Ignition finished successfully Oct 8 19:55:11.655348 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 8 19:55:11.669830 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 8 19:55:11.672131 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 8 19:55:11.673413 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 19:55:11.673493 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:55:11.679746 initrd-setup-root-after-ignition[979]: grep: /sysroot/oem/oem-release: No such file or directory Oct 8 19:55:11.681904 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:55:11.681904 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:55:11.685023 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:55:11.685393 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 19:55:11.687828 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 19:55:11.694482 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 19:55:11.715838 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 19:55:11.715974 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 8 19:55:11.718051 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 8 19:55:11.719643 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 8 19:55:11.721281 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 19:55:11.722206 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 19:55:11.741139 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 19:55:11.751486 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 19:55:11.761443 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 19:55:11.762713 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:55:11.764807 systemd[1]: Stopped target timers.target - Timer Units. 
Oct 8 19:55:11.766562 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 19:55:11.766698 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 19:55:11.769064 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 8 19:55:11.771041 systemd[1]: Stopped target basic.target - Basic System. Oct 8 19:55:11.772679 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 19:55:11.774214 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 19:55:11.776100 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 8 19:55:11.778054 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 19:55:11.779861 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 19:55:11.781601 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 19:55:11.783539 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 19:55:11.785128 systemd[1]: Stopped target swap.target - Swaps. Oct 8 19:55:11.786655 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 8 19:55:11.786777 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 8 19:55:11.788932 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:55:11.790829 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:55:11.792752 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 8 19:55:11.793358 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:55:11.794734 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 19:55:11.794851 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 8 19:55:11.797470 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Oct 8 19:55:11.797598 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 19:55:11.799501 systemd[1]: Stopped target paths.target - Path Units. Oct 8 19:55:11.800945 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 19:55:11.804372 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:55:11.805499 systemd[1]: Stopped target slices.target - Slice Units. Oct 8 19:55:11.807375 systemd[1]: Stopped target sockets.target - Socket Units. Oct 8 19:55:11.808795 systemd[1]: iscsid.socket: Deactivated successfully. Oct 8 19:55:11.808893 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 19:55:11.810249 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 8 19:55:11.810348 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 19:55:11.811754 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 8 19:55:11.811859 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 19:55:11.813647 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 19:55:11.813750 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 19:55:11.826541 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 19:55:11.827381 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 8 19:55:11.827507 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 19:55:11.832552 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 8 19:55:11.833414 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 19:55:11.833585 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:55:11.839251 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Oct 8 19:55:11.841416 ignition[1006]: INFO : Ignition 2.18.0 Oct 8 19:55:11.841416 ignition[1006]: INFO : Stage: umount Oct 8 19:55:11.841416 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:55:11.841416 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:55:11.839378 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 19:55:11.848499 ignition[1006]: INFO : umount: umount passed Oct 8 19:55:11.848499 ignition[1006]: INFO : Ignition finished successfully Oct 8 19:55:11.844181 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 8 19:55:11.844271 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 8 19:55:11.849819 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 19:55:11.850500 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 19:55:11.850621 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 8 19:55:11.860701 systemd[1]: Stopped target network.target - Network. Oct 8 19:55:11.865697 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 19:55:11.865761 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 19:55:11.867401 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 19:55:11.867440 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 19:55:11.868579 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 8 19:55:11.868630 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 8 19:55:11.869817 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 8 19:55:11.869859 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 8 19:55:11.871830 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 8 19:55:11.873356 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Oct 8 19:55:11.881571 systemd-networkd[768]: eth0: DHCPv6 lease lost Oct 8 19:55:11.883499 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 8 19:55:11.883671 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 8 19:55:11.886441 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 8 19:55:11.886566 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 8 19:55:11.889013 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 8 19:55:11.889077 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 8 19:55:11.899436 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 8 19:55:11.900569 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 8 19:55:11.900651 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 19:55:11.902865 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 19:55:11.902915 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:55:11.904834 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 8 19:55:11.904888 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 8 19:55:11.907380 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 8 19:55:11.907430 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 8 19:55:11.909720 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:55:11.912842 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 8 19:55:11.915034 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 8 19:55:11.917544 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 8 19:55:11.917618 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Oct 8 19:55:11.925873 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:55:11.926011 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:55:11.930208 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:55:11.930376 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:55:11.932774 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:55:11.932815 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:55:11.934064 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:55:11.934100 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:55:11.936441 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:55:11.936499 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:55:11.939449 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:55:11.939500 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:55:11.942296 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:55:11.942442 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:55:11.954484 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:55:11.955547 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:55:11.955622 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:55:11.957758 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:55:11.957810 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:55:11.959992 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:55:11.960084 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:55:11.962157 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:55:11.964413 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:55:11.974859 systemd[1]: Switching root.
Oct 8 19:55:12.001784 systemd-journald[237]: Journal stopped
Oct 8 19:55:12.772917 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:55:12.772976 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:55:12.772989 kernel: SELinux: policy capability open_perms=1
Oct 8 19:55:12.772999 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:55:12.773008 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:55:12.773021 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:55:12.773032 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:55:12.773046 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:55:12.773056 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:55:12.773066 kernel: audit: type=1403 audit(1728417312.149:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:55:12.773077 systemd[1]: Successfully loaded SELinux policy in 37.122ms.
Oct 8 19:55:12.773095 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.500ms.
Oct 8 19:55:12.773109 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:55:12.773120 systemd[1]: Detected virtualization kvm.
Oct 8 19:55:12.773133 systemd[1]: Detected architecture arm64.
Oct 8 19:55:12.773143 systemd[1]: Detected first boot.
Oct 8 19:55:12.773158 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:55:12.773169 zram_generator::config[1050]: No configuration found.
Oct 8 19:55:12.773181 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:55:12.773192 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 8 19:55:12.773212 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 8 19:55:12.773226 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:55:12.773239 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:55:12.773249 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:55:12.773261 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:55:12.773272 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:55:12.773282 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:55:12.773293 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:55:12.773304 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:55:12.773436 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:55:12.773454 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:55:12.773465 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:55:12.773476 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:55:12.773488 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:55:12.773500 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:55:12.773512 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:55:12.773523 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 8 19:55:12.773535 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:55:12.773549 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 8 19:55:12.773568 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 8 19:55:12.773579 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:55:12.773599 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:55:12.773612 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:55:12.773623 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:55:12.773634 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:55:12.773644 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:55:12.773655 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:55:12.773669 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:55:12.773681 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:55:12.773692 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:55:12.773702 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:55:12.773713 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:55:12.773724 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:55:12.773734 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:55:12.773746 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:55:12.773757 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:55:12.773769 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:55:12.773780 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:55:12.773791 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:55:12.773802 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:55:12.773812 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:55:12.773824 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:55:12.773835 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:55:12.773847 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:55:12.773857 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:55:12.773872 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:55:12.773883 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:55:12.773894 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:55:12.773905 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:55:12.773916 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:55:12.773927 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 8 19:55:12.773938 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 8 19:55:12.773949 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 8 19:55:12.773962 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 8 19:55:12.773973 kernel: loop: module loaded
Oct 8 19:55:12.773982 kernel: fuse: init (API version 7.39)
Oct 8 19:55:12.773993 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:55:12.774004 kernel: ACPI: bus type drm_connector registered
Oct 8 19:55:12.774014 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:55:12.774025 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:55:12.774036 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:55:12.774046 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:55:12.774058 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 8 19:55:12.774069 systemd[1]: Stopped verity-setup.service.
Oct 8 19:55:12.774080 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:55:12.774090 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:55:12.774104 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:55:12.774114 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:55:12.774125 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:55:12.774137 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:55:12.774169 systemd-journald[1123]: Collecting audit messages is disabled.
Oct 8 19:55:12.774190 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:55:12.774201 systemd-journald[1123]: Journal started
Oct 8 19:55:12.774225 systemd-journald[1123]: Runtime Journal (/run/log/journal/37528c7a40e549d3ba8678d6501d5769) is 5.9M, max 47.3M, 41.4M free.
Oct 8 19:55:12.540748 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:55:12.561305 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 8 19:55:12.561705 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 19:55:12.777811 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:55:12.778658 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:55:12.780104 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:55:12.780256 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:55:12.781687 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:55:12.781844 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:55:12.783219 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:55:12.783401 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:55:12.784803 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:55:12.784953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:55:12.786487 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:55:12.786661 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:55:12.788016 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:55:12.788171 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:55:12.789555 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:55:12.790977 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:55:12.792756 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:55:12.808535 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:55:12.824462 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:55:12.826816 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:55:12.827884 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:55:12.827936 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:55:12.830035 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:55:12.832493 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:55:12.834772 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:55:12.835951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:55:12.837525 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:55:12.839582 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:55:12.840819 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:55:12.844539 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:55:12.846509 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:55:12.847653 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:55:12.852529 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:55:12.855143 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:55:12.859136 systemd-journald[1123]: Time spent on flushing to /var/log/journal/37528c7a40e549d3ba8678d6501d5769 is 16.654ms for 856 entries.
Oct 8 19:55:12.859136 systemd-journald[1123]: System Journal (/var/log/journal/37528c7a40e549d3ba8678d6501d5769) is 8.0M, max 195.6M, 187.6M free.
Oct 8 19:55:12.892518 systemd-journald[1123]: Received client request to flush runtime journal.
Oct 8 19:55:12.892579 kernel: loop0: detected capacity change from 0 to 59688
Oct 8 19:55:12.892629 kernel: block loop0: the capability attribute has been deprecated.
Oct 8 19:55:12.859993 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:55:12.862271 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:55:12.863876 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:55:12.865366 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:55:12.868778 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:55:12.875020 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 19:55:12.886471 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:55:12.890287 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:55:12.903890 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:55:12.905357 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:55:12.917704 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:55:12.919537 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:55:12.936633 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:55:12.938705 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:55:12.940032 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:55:12.948197 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 8 19:55:12.951463 kernel: loop1: detected capacity change from 0 to 194512
Oct 8 19:55:12.964356 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Oct 8 19:55:12.964373 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Oct 8 19:55:12.968930 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:55:12.995336 kernel: loop2: detected capacity change from 0 to 113672
Oct 8 19:55:13.031358 kernel: loop3: detected capacity change from 0 to 59688
Oct 8 19:55:13.036393 kernel: loop4: detected capacity change from 0 to 194512
Oct 8 19:55:13.044383 kernel: loop5: detected capacity change from 0 to 113672
Oct 8 19:55:13.047135 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 8 19:55:13.047550 (sd-merge)[1186]: Merged extensions into '/usr'.
Oct 8 19:55:13.055470 systemd[1]: Reloading requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:55:13.055487 systemd[1]: Reloading...
Oct 8 19:55:13.114344 zram_generator::config[1210]: No configuration found.
Oct 8 19:55:13.149746 ldconfig[1155]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:55:13.205448 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:55:13.242911 systemd[1]: Reloading finished in 187 ms.
Oct 8 19:55:13.272108 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:55:13.273611 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:55:13.286515 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:55:13.288736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 8 19:55:13.311635 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:55:13.311650 systemd[1]: Reloading...
Oct 8 19:55:13.321396 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:55:13.321695 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:55:13.322360 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 19:55:13.322598 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Oct 8 19:55:13.322655 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Oct 8 19:55:13.325029 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:55:13.325043 systemd-tmpfiles[1245]: Skipping /boot
Oct 8 19:55:13.331983 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:55:13.332002 systemd-tmpfiles[1245]: Skipping /boot
Oct 8 19:55:13.368366 zram_generator::config[1270]: No configuration found.
Oct 8 19:55:13.457497 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:55:13.495286 systemd[1]: Reloading finished in 183 ms.
Oct 8 19:55:13.515418 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 19:55:13.529757 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:55:13.536633 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:55:13.539091 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:55:13.541489 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 19:55:13.547569 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:55:13.552677 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:55:13.557722 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:55:13.579657 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 19:55:13.581344 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 19:55:13.588935 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:55:13.590572 systemd-udevd[1317]: Using default interface naming scheme 'v255'.
Oct 8 19:55:13.592190 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:55:13.597989 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:55:13.606618 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:55:13.608153 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:55:13.609951 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 19:55:13.612463 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 19:55:13.614250 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:55:13.614389 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:55:13.617076 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 19:55:13.618887 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:55:13.619077 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:55:13.622924 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:55:13.623094 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:55:13.628721 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 19:55:13.630574 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 19:55:13.636139 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:55:13.638536 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:55:13.649176 augenrules[1345]: No rules
Oct 8 19:55:13.647752 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:55:13.652827 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:55:13.657486 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:55:13.658625 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:55:13.664592 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:55:13.665649 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:55:13.667479 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:55:13.669894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:55:13.672173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:55:13.674071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:55:13.674201 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:55:13.680865 systemd-resolved[1311]: Positive Trust Anchors:
Oct 8 19:55:13.680880 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:55:13.680918 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 8 19:55:13.689535 systemd-resolved[1311]: Defaulting to hostname 'linux'.
Oct 8 19:55:13.692232 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:55:13.692410 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:55:13.694122 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:55:13.695746 systemd[1]: Finished ensure-sysext.service.
Oct 8 19:55:13.699894 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 8 19:55:13.701341 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1369)
Oct 8 19:55:13.704334 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:55:13.705751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:55:13.721564 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:55:13.728962 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:55:13.732747 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:55:13.736521 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:55:13.741052 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1353)
Oct 8 19:55:13.739457 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 19:55:13.741541 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:55:13.742027 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:55:13.742393 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:55:13.746114 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:55:13.746246 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:55:13.747609 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:55:13.747746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:55:13.754013 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:55:13.754177 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:55:13.758209 systemd-networkd[1371]: lo: Link UP
Oct 8 19:55:13.758217 systemd-networkd[1371]: lo: Gained carrier
Oct 8 19:55:13.758968 systemd-networkd[1371]: Enumeration completed
Oct 8 19:55:13.759829 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:55:13.763006 systemd[1]: Reached target network.target - Network.
Oct 8 19:55:13.763123 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:55:13.763137 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:55:13.763877 systemd-networkd[1371]: eth0: Link UP
Oct 8 19:55:13.763887 systemd-networkd[1371]: eth0: Gained carrier
Oct 8 19:55:13.763902 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:55:13.772483 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 19:55:13.775816 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 19:55:13.778663 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 19:55:13.781502 systemd-networkd[1371]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:55:13.805532 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 19:55:13.810721 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 19:55:13.811572 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 8 19:55:13.811628 systemd-timesyncd[1387]: Initial clock synchronization to Tue 2024-10-08 19:55:13.593446 UTC.
Oct 8 19:55:13.812264 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 19:55:13.830588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:55:13.839758 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 19:55:13.843019 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 19:55:13.870848 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:55:13.870882 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:55:13.900988 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 19:55:13.902451 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:55:13.903589 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:55:13.904711 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 19:55:13.905900 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 19:55:13.907327 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 19:55:13.908389 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 19:55:13.909514 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 19:55:13.910646 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 19:55:13.910678 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:55:13.911551 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:55:13.913459 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 19:55:13.915844 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 19:55:13.926286 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 19:55:13.928418 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 19:55:13.929909 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 19:55:13.931025 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:55:13.931990 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:55:13.932813 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:55:13.932846 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:55:13.933721 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 19:55:13.935733 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 19:55:13.936992 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:55:13.940440 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 19:55:13.942274 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 19:55:13.943610 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 19:55:13.946578 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 19:55:13.949241 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 19:55:13.949822 jq[1412]: false
Oct 8 19:55:13.952417 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 19:55:13.954271 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 19:55:13.960925 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 19:55:13.965683 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 19:55:13.966054 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 19:55:13.966252 dbus-daemon[1411]: [system] SELinux support is enabled
Oct 8 19:55:13.971352 extend-filesystems[1413]: Found loop3
Oct 8 19:55:13.971352 extend-filesystems[1413]: Found loop4
Oct 8 19:55:13.971352 extend-filesystems[1413]: Found loop5
Oct 8 19:55:13.971352 extend-filesystems[1413]: Found vda
Oct 8 19:55:13.971352 extend-filesystems[1413]: Found vda1
Oct 8 19:55:13.971352 extend-filesystems[1413]: Found vda2
Oct 8 19:55:13.971352 extend-filesystems[1413]: Found vda3
Oct 8 19:55:13.971352 extend-filesystems[1413]: Found usr
Oct 8 19:55:13.971352 extend-filesystems[1413]: Found vda4
Oct 8 19:55:13.971352 extend-filesystems[1413]: Found vda6
Oct 8 19:55:13.971352 extend-filesystems[1413]: Found vda7
Oct 8 19:55:13.971352 extend-filesystems[1413]: Found vda9
Oct 8 19:55:13.971352 extend-filesystems[1413]: Checking size of /dev/vda9
Oct 8 19:55:13.972706 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 19:55:13.995023 extend-filesystems[1413]: Resized partition /dev/vda9
Oct 8 19:55:13.976249 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 19:55:13.980034 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 19:55:13.998866 jq[1431]: true
Oct 8 19:55:13.995700 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 19:55:14.003338 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1347)
Oct 8 19:55:14.009869 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 19:55:14.010038 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 19:55:14.010308 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 19:55:14.010474 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 19:55:14.012451 extend-filesystems[1434]: resize2fs 1.47.0 (5-Feb-2023)
Oct 8 19:55:14.013686 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 19:55:14.013845 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 19:55:14.017413 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 8 19:55:14.018548 update_engine[1423]: I1008 19:55:14.018368 1423 main.cc:92] Flatcar Update Engine starting
Oct 8 19:55:14.025003 systemd-logind[1419]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 8 19:55:14.025240 systemd-logind[1419]: New seat seat0.
Oct 8 19:55:14.027073 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 8 19:55:14.029846 update_engine[1423]: I1008 19:55:14.029685 1423 update_check_scheduler.cc:74] Next update check in 3m47s
Oct 8 19:55:14.031425 jq[1437]: true
Oct 8 19:55:14.034198 dbus-daemon[1411]: [system] Successfully activated service 'org.freedesktop.systemd1'
Oct 8 19:55:14.034687 (ntainerd)[1438]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 19:55:14.039900 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 19:55:14.043767 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 19:55:14.043908 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 19:55:14.056389 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 8 19:55:14.056430 tar[1436]: linux-arm64/helm
Oct 8 19:55:14.070898 extend-filesystems[1434]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 8 19:55:14.070898 extend-filesystems[1434]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 8 19:55:14.070898 extend-filesystems[1434]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 8 19:55:14.045983 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 19:55:14.074491 extend-filesystems[1413]: Resized filesystem in /dev/vda9
Oct 8 19:55:14.046082 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 19:55:14.058561 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 19:55:14.061520 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 8 19:55:14.061697 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 8 19:55:14.096981 bash[1466]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 19:55:14.098950 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 8 19:55:14.102796 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 8 19:55:14.132190 locksmithd[1451]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 8 19:55:14.235896 containerd[1438]: time="2024-10-08T19:55:14.235783200Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Oct 8 19:55:14.258572 containerd[1438]: time="2024-10-08T19:55:14.258454656Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 8 19:55:14.258572 containerd[1438]: time="2024-10-08T19:55:14.258505667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:55:14.260338 containerd[1438]: time="2024-10-08T19:55:14.260047390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:55:14.260338 containerd[1438]: time="2024-10-08T19:55:14.260079569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:55:14.260338 containerd[1438]: time="2024-10-08T19:55:14.260304000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:55:14.260338 containerd[1438]: time="2024-10-08T19:55:14.260339253Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 8 19:55:14.260469 containerd[1438]: time="2024-10-08T19:55:14.260413026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 8 19:55:14.260469 containerd[1438]: time="2024-10-08T19:55:14.260460145Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:55:14.260503 containerd[1438]: time="2024-10-08T19:55:14.260471429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 8 19:55:14.260550 containerd[1438]: time="2024-10-08T19:55:14.260526837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:55:14.260736 containerd[1438]: time="2024-10-08T19:55:14.260713331Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 8 19:55:14.260759 containerd[1438]: time="2024-10-08T19:55:14.260741697Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 8 19:55:14.260759 containerd[1438]: time="2024-10-08T19:55:14.260751930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:55:14.260865 containerd[1438]: time="2024-10-08T19:55:14.260846364Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:55:14.260865 containerd[1438]: time="2024-10-08T19:55:14.260863329Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 8 19:55:14.260932 containerd[1438]: time="2024-10-08T19:55:14.260915935Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 8 19:55:14.260955 containerd[1438]: time="2024-10-08T19:55:14.260932005Z" level=info msg="metadata content store policy set" policy=shared
Oct 8 19:55:14.263760 containerd[1438]: time="2024-10-08T19:55:14.263725106Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 8 19:55:14.263760 containerd[1438]: time="2024-10-08T19:55:14.263757128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 8 19:55:14.263855 containerd[1438]: time="2024-10-08T19:55:14.263770085Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 8 19:55:14.263855 containerd[1438]: time="2024-10-08T19:55:14.263798217Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 8 19:55:14.263855 containerd[1438]: time="2024-10-08T19:55:14.263812653Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 8 19:55:14.263855 containerd[1438]: time="2024-10-08T19:55:14.263822341Z" level=info msg="NRI interface is disabled by configuration."
Oct 8 19:55:14.263855 containerd[1438]: time="2024-10-08T19:55:14.263834598Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 8 19:55:14.263973 containerd[1438]: time="2024-10-08T19:55:14.263953467Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 8 19:55:14.263998 containerd[1438]: time="2024-10-08T19:55:14.263975490Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 8 19:55:14.263998 containerd[1438]: time="2024-10-08T19:55:14.263987864Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 8 19:55:14.264046 containerd[1438]: time="2024-10-08T19:55:14.264000354Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 8 19:55:14.264046 containerd[1438]: time="2024-10-08T19:55:14.264014595Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 8 19:55:14.265089 containerd[1438]: time="2024-10-08T19:55:14.264320464Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 8 19:55:14.265232 containerd[1438]: time="2024-10-08T19:55:14.265201888Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 8 19:55:14.265232 containerd[1438]: time="2024-10-08T19:55:14.265225001Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 8 19:55:14.265269 containerd[1438]: time="2024-10-08T19:55:14.265239125Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 8 19:55:14.265269 containerd[1438]: time="2024-10-08T19:55:14.265253483Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 8 19:55:14.265269 containerd[1438]: time="2024-10-08T19:55:14.265265156Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 8 19:55:14.265331 containerd[1438]: time="2024-10-08T19:55:14.265276245Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 8 19:55:14.265587 containerd[1438]: time="2024-10-08T19:55:14.265444141Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 8 19:55:14.265729 containerd[1438]: time="2024-10-08T19:55:14.265707326Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 8 19:55:14.265762 containerd[1438]: time="2024-10-08T19:55:14.265739233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.265762 containerd[1438]: time="2024-10-08T19:55:14.265753746Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 8 19:55:14.265810 containerd[1438]: time="2024-10-08T19:55:14.265775496Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 8 19:55:14.265900 containerd[1438]: time="2024-10-08T19:55:14.265888763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.265927 containerd[1438]: time="2024-10-08T19:55:14.265905844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.265927 containerd[1438]: time="2024-10-08T19:55:14.265918918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.265968 containerd[1438]: time="2024-10-08T19:55:14.265930280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.265968 containerd[1438]: time="2024-10-08T19:55:14.265942069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.265968 containerd[1438]: time="2024-10-08T19:55:14.265953898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.265968 containerd[1438]: time="2024-10-08T19:55:14.265965260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.266036 containerd[1438]: time="2024-10-08T19:55:14.265977049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.266036 containerd[1438]: time="2024-10-08T19:55:14.265989345Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 8 19:55:14.266142 containerd[1438]: time="2024-10-08T19:55:14.266109809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.266173 containerd[1438]: time="2024-10-08T19:55:14.266140937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.266173 containerd[1438]: time="2024-10-08T19:55:14.266156501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.266173 containerd[1438]: time="2024-10-08T19:55:14.266168680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.266230 containerd[1438]: time="2024-10-08T19:55:14.266180898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.266230 containerd[1438]: time="2024-10-08T19:55:14.266209302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.266230 containerd[1438]: time="2024-10-08T19:55:14.266221636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.266277 containerd[1438]: time="2024-10-08T19:55:14.266232375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 8 19:55:14.266550 containerd[1438]: time="2024-10-08T19:55:14.266493110Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 8 19:55:14.266665 containerd[1438]: time="2024-10-08T19:55:14.266554276Z" level=info msg="Connect containerd service"
Oct 8 19:55:14.266665 containerd[1438]: time="2024-10-08T19:55:14.266600306Z" level=info msg="using legacy CRI server"
Oct 8 19:55:14.266665 containerd[1438]: time="2024-10-08T19:55:14.266607621Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 8 19:55:14.266775 containerd[1438]: time="2024-10-08T19:55:14.266757346Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 8 19:55:14.267386 containerd[1438]: time="2024-10-08T19:55:14.267360798Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 19:55:14.267442 containerd[1438]: time="2024-10-08T19:55:14.267414494Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 8 19:55:14.267442 containerd[1438]: time="2024-10-08T19:55:14.267432120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 8 19:55:14.267481 containerd[1438]: time="2024-10-08T19:55:14.267441848Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 8 19:55:14.267481 containerd[1438]: time="2024-10-08T19:55:14.267455622Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 8 19:55:14.267906 containerd[1438]: time="2024-10-08T19:55:14.267722076Z" level=info msg="Start subscribing containerd event"
Oct 8 19:55:14.267906 containerd[1438]: time="2024-10-08T19:55:14.267781102Z" level=info msg="Start recovering state"
Oct 8 19:55:14.267906 containerd[1438]: time="2024-10-08T19:55:14.267853552Z" level=info msg="Start event monitor"
Oct 8 19:55:14.267906 containerd[1438]: time="2024-10-08T19:55:14.267873357Z" level=info msg="Start snapshots syncer"
Oct 8 19:55:14.267906 containerd[1438]: time="2024-10-08T19:55:14.267882268Z" level=info msg="Start cni network conf syncer for default"
Oct 8 19:55:14.267906 containerd[1438]: time="2024-10-08T19:55:14.267889077Z" level=info msg="Start streaming server"
Oct 8 19:55:14.268043 containerd[1438]: time="2024-10-08T19:55:14.268000281Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 8 19:55:14.268043 containerd[1438]: time="2024-10-08T19:55:14.268037245Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 8 19:55:14.268460 systemd[1]: Started containerd.service - containerd container runtime.
Oct 8 19:55:14.269900 containerd[1438]: time="2024-10-08T19:55:14.269765307Z" level=info msg="containerd successfully booted in 0.036889s"
Oct 8 19:55:14.387667 tar[1436]: linux-arm64/LICENSE
Oct 8 19:55:14.387667 tar[1436]: linux-arm64/README.md
Oct 8 19:55:14.400941 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 8 19:55:14.602649 sshd_keygen[1426]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 8 19:55:14.620731 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 8 19:55:14.632578 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 8 19:55:14.638062 systemd[1]: issuegen.service: Deactivated successfully.
Oct 8 19:55:14.638331 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 8 19:55:14.641146 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 8 19:55:14.656345 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 8 19:55:14.659245 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 8 19:55:14.661559 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Oct 8 19:55:14.662965 systemd[1]: Reached target getty.target - Login Prompts.
Oct 8 19:55:15.664439 systemd-networkd[1371]: eth0: Gained IPv6LL
Oct 8 19:55:15.666865 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 8 19:55:15.668515 systemd[1]: Reached target network-online.target - Network is Online.
Oct 8 19:55:15.682602 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 8 19:55:15.685041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:55:15.687058 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 8 19:55:15.702516 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 8 19:55:15.702726 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 8 19:55:15.704980 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 8 19:55:15.705583 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 8 19:55:16.149201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:55:16.151038 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 8 19:55:16.153443 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:55:16.155385 systemd[1]: Startup finished in 554ms (kernel) + 4.432s (initrd) + 4.047s (userspace) = 9.034s.
Oct 8 19:55:16.617357 kubelet[1523]: E1008 19:55:16.617164 1523 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:55:16.619734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:55:16.619866 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:55:19.931905 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 8 19:55:19.932959 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:58686.service - OpenSSH per-connection server daemon (10.0.0.1:58686).
Oct 8 19:55:19.987398 sshd[1537]: Accepted publickey for core from 10.0.0.1 port 58686 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:55:19.988968 sshd[1537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:55:19.998695 systemd-logind[1419]: New session 1 of user core.
Oct 8 19:55:19.999680 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 8 19:55:20.013568 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 8 19:55:20.024355 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 8 19:55:20.026582 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 8 19:55:20.033229 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:55:20.105511 systemd[1541]: Queued start job for default target default.target.
Oct 8 19:55:20.114340 systemd[1541]: Created slice app.slice - User Application Slice.
Oct 8 19:55:20.114369 systemd[1541]: Reached target paths.target - Paths.
Oct 8 19:55:20.114381 systemd[1541]: Reached target timers.target - Timers.
Oct 8 19:55:20.115616 systemd[1541]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 8 19:55:20.125417 systemd[1541]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 8 19:55:20.125483 systemd[1541]: Reached target sockets.target - Sockets.
Oct 8 19:55:20.125495 systemd[1541]: Reached target basic.target - Basic System.
Oct 8 19:55:20.125529 systemd[1541]: Reached target default.target - Main User Target.
Oct 8 19:55:20.125556 systemd[1541]: Startup finished in 86ms.
Oct 8 19:55:20.125876 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 8 19:55:20.127449 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 8 19:55:20.186341 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:58690.service - OpenSSH per-connection server daemon (10.0.0.1:58690).
Oct 8 19:55:20.220700 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 58690 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:55:20.221931 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:55:20.225831 systemd-logind[1419]: New session 2 of user core.
Oct 8 19:55:20.235462 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 8 19:55:20.287327 sshd[1552]: pam_unix(sshd:session): session closed for user core
Oct 8 19:55:20.298658 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:58690.service: Deactivated successfully.
Oct 8 19:55:20.300136 systemd[1]: session-2.scope: Deactivated successfully.
Oct 8 19:55:20.302381 systemd-logind[1419]: Session 2 logged out. Waiting for processes to exit.
Oct 8 19:55:20.302787 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:58706.service - OpenSSH per-connection server daemon (10.0.0.1:58706).
Oct 8 19:55:20.303976 systemd-logind[1419]: Removed session 2.
Oct 8 19:55:20.338256 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 58706 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:55:20.339910 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:55:20.344012 systemd-logind[1419]: New session 3 of user core.
Oct 8 19:55:20.359442 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 8 19:55:20.407372 sshd[1559]: pam_unix(sshd:session): session closed for user core
Oct 8 19:55:20.415459 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:58706.service: Deactivated successfully.
Oct 8 19:55:20.416747 systemd[1]: session-3.scope: Deactivated successfully.
Oct 8 19:55:20.417244 systemd-logind[1419]: Session 3 logged out. Waiting for processes to exit.
Oct 8 19:55:20.418886 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:58714.service - OpenSSH per-connection server daemon (10.0.0.1:58714).
Oct 8 19:55:20.421494 systemd-logind[1419]: Removed session 3.
Oct 8 19:55:20.454154 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 58714 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:55:20.455218 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:55:20.458967 systemd-logind[1419]: New session 4 of user core.
Oct 8 19:55:20.477451 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 8 19:55:20.529274 sshd[1566]: pam_unix(sshd:session): session closed for user core
Oct 8 19:55:20.539472 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:58714.service: Deactivated successfully.
Oct 8 19:55:20.540664 systemd[1]: session-4.scope: Deactivated successfully.
Oct 8 19:55:20.542439 systemd-logind[1419]: Session 4 logged out. Waiting for processes to exit.
Oct 8 19:55:20.543503 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:58728.service - OpenSSH per-connection server daemon (10.0.0.1:58728).
Oct 8 19:55:20.544269 systemd-logind[1419]: Removed session 4.
Oct 8 19:55:20.579219 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 58728 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:55:20.580413 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:55:20.584479 systemd-logind[1419]: New session 5 of user core.
Oct 8 19:55:20.596451 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 8 19:55:20.659945 sudo[1576]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 8 19:55:20.660203 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:55:20.673127 sudo[1576]: pam_unix(sudo:session): session closed for user root
Oct 8 19:55:20.674862 sshd[1573]: pam_unix(sshd:session): session closed for user core
Oct 8 19:55:20.683713 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:58728.service: Deactivated successfully.
Oct 8 19:55:20.685373 systemd[1]: session-5.scope: Deactivated successfully.
Oct 8 19:55:20.686842 systemd-logind[1419]: Session 5 logged out. Waiting for processes to exit.
Oct 8 19:55:20.688255 systemd[1]: Started sshd@5-10.0.0.134:22-10.0.0.1:58734.service - OpenSSH per-connection server daemon (10.0.0.1:58734).
Oct 8 19:55:20.689172 systemd-logind[1419]: Removed session 5.
Oct 8 19:55:20.724207 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 58734 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:55:20.726005 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:55:20.730203 systemd-logind[1419]: New session 6 of user core.
Oct 8 19:55:20.737457 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 8 19:55:20.789554 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 8 19:55:20.789795 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:55:20.792629 sudo[1585]: pam_unix(sudo:session): session closed for user root
Oct 8 19:55:20.796974 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 8 19:55:20.797199 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:55:20.814611 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 8 19:55:20.815649 auditctl[1588]: No rules
Oct 8 19:55:20.816478 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 8 19:55:20.817394 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 8 19:55:20.818968 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:55:20.841408 augenrules[1606]: No rules
Oct 8 19:55:20.842685 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:55:20.843692 sudo[1584]: pam_unix(sudo:session): session closed for user root
Oct 8 19:55:20.845088 sshd[1581]: pam_unix(sshd:session): session closed for user core
Oct 8 19:55:20.857613 systemd[1]: sshd@5-10.0.0.134:22-10.0.0.1:58734.service: Deactivated successfully.
Oct 8 19:55:20.859004 systemd[1]: session-6.scope: Deactivated successfully.
Oct 8 19:55:20.860184 systemd-logind[1419]: Session 6 logged out. Waiting for processes to exit.
Oct 8 19:55:20.861364 systemd[1]: Started sshd@6-10.0.0.134:22-10.0.0.1:58748.service - OpenSSH per-connection server daemon (10.0.0.1:58748).
Oct 8 19:55:20.862055 systemd-logind[1419]: Removed session 6.
Oct 8 19:55:20.895675 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 58748 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:55:20.896844 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:55:20.900391 systemd-logind[1419]: New session 7 of user core.
Oct 8 19:55:20.910441 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 8 19:55:20.959957 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 8 19:55:20.960519 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 19:55:21.059540 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 8 19:55:21.059634 (dockerd)[1627]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 8 19:55:21.289340 dockerd[1627]: time="2024-10-08T19:55:21.289272520Z" level=info msg="Starting up"
Oct 8 19:55:21.376099 dockerd[1627]: time="2024-10-08T19:55:21.375988761Z" level=info msg="Loading containers: start."
Oct 8 19:55:21.454333 kernel: Initializing XFRM netlink socket
Oct 8 19:55:21.512124 systemd-networkd[1371]: docker0: Link UP
Oct 8 19:55:21.529616 dockerd[1627]: time="2024-10-08T19:55:21.529576380Z" level=info msg="Loading containers: done."
Oct 8 19:55:21.583227 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck30939428-merged.mount: Deactivated successfully.
Oct 8 19:55:21.585019 dockerd[1627]: time="2024-10-08T19:55:21.584974998Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 8 19:55:21.585179 dockerd[1627]: time="2024-10-08T19:55:21.585160472Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Oct 8 19:55:21.585296 dockerd[1627]: time="2024-10-08T19:55:21.585280018Z" level=info msg="Daemon has completed initialization"
Oct 8 19:55:21.609944 dockerd[1627]: time="2024-10-08T19:55:21.609634731Z" level=info msg="API listen on /run/docker.sock"
Oct 8 19:55:21.610165 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 8 19:55:22.214867 containerd[1438]: time="2024-10-08T19:55:22.214823832Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\""
Oct 8 19:55:22.848065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount628561302.mount: Deactivated successfully.
Oct 8 19:55:24.224668 containerd[1438]: time="2024-10-08T19:55:24.224591841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:24.225501 containerd[1438]: time="2024-10-08T19:55:24.225249849Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=32286060"
Oct 8 19:55:24.226173 containerd[1438]: time="2024-10-08T19:55:24.226142681Z" level=info msg="ImageCreate event name:\"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:24.231309 containerd[1438]: time="2024-10-08T19:55:24.229667188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:24.231309 containerd[1438]: time="2024-10-08T19:55:24.230775424Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"32282858\" in 2.015913731s"
Oct 8 19:55:24.231309 containerd[1438]: time="2024-10-08T19:55:24.230804216Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\""
Oct 8 19:55:24.249353 containerd[1438]: time="2024-10-08T19:55:24.249327205Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\""
Oct 8 19:55:25.867224 containerd[1438]: time="2024-10-08T19:55:25.866949193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:25.868086 containerd[1438]: time="2024-10-08T19:55:25.868000237Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=29374206"
Oct 8 19:55:25.872501 containerd[1438]: time="2024-10-08T19:55:25.872459205Z" level=info msg="ImageCreate event name:\"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:25.875400 containerd[1438]: time="2024-10-08T19:55:25.875346338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:25.876865 containerd[1438]: time="2024-10-08T19:55:25.876825121Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"30862018\" in 1.62737583s"
Oct 8 19:55:25.876907 containerd[1438]: time="2024-10-08T19:55:25.876863598Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\""
Oct 8 19:55:25.897321 containerd[1438]: time="2024-10-08T19:55:25.897273143Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\""
Oct 8 19:55:26.851098 containerd[1438]: time="2024-10-08T19:55:26.851045635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:26.851591 containerd[1438]: time="2024-10-08T19:55:26.851567673Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=15751219"
Oct 8 19:55:26.852405 containerd[1438]: time="2024-10-08T19:55:26.852365788Z" level=info msg="ImageCreate event name:\"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:26.855251 containerd[1438]: time="2024-10-08T19:55:26.855211440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:26.856250 containerd[1438]: time="2024-10-08T19:55:26.856221743Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"17239049\" in 958.903594ms"
Oct 8 19:55:26.856322 containerd[1438]: time="2024-10-08T19:55:26.856254880Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\""
Oct 8 19:55:26.870176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:55:26.876453 containerd[1438]: time="2024-10-08T19:55:26.876416907Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\""
Oct 8 19:55:26.877675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:55:26.974067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:55:26.978744 (kubelet)[1855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:55:27.063207 kubelet[1855]: E1008 19:55:27.063077 1855 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:55:27.066572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:55:27.066712 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:55:27.933723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1792465701.mount: Deactivated successfully.
Oct 8 19:55:28.272391 containerd[1438]: time="2024-10-08T19:55:28.272259215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:28.273461 containerd[1438]: time="2024-10-08T19:55:28.273429196Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=25254040"
Oct 8 19:55:28.274832 containerd[1438]: time="2024-10-08T19:55:28.274401652Z" level=info msg="ImageCreate event name:\"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:28.276067 containerd[1438]: time="2024-10-08T19:55:28.276013011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:28.276874 containerd[1438]: time="2024-10-08T19:55:28.276789294Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"25253057\" in 1.400333707s"
Oct 8 19:55:28.276874 containerd[1438]: time="2024-10-08T19:55:28.276823868Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\""
Oct 8 19:55:28.295923 containerd[1438]: time="2024-10-08T19:55:28.295880595Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 8 19:55:28.931434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197534540.mount: Deactivated successfully.
Oct 8 19:55:29.463460 containerd[1438]: time="2024-10-08T19:55:29.463401053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:29.464149 containerd[1438]: time="2024-10-08T19:55:29.464095370Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Oct 8 19:55:29.464745 containerd[1438]: time="2024-10-08T19:55:29.464700098Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:29.468071 containerd[1438]: time="2024-10-08T19:55:29.468031003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:29.470064 containerd[1438]: time="2024-10-08T19:55:29.470025601Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.174096963s"
Oct 8 19:55:29.470093 containerd[1438]: time="2024-10-08T19:55:29.470066809Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Oct 8 19:55:29.488920 containerd[1438]: time="2024-10-08T19:55:29.488876977Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 8 19:55:29.972054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4197140087.mount: Deactivated successfully.
Oct 8 19:55:29.977271 containerd[1438]: time="2024-10-08T19:55:29.977217456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:29.977722 containerd[1438]: time="2024-10-08T19:55:29.977678475Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Oct 8 19:55:29.978733 containerd[1438]: time="2024-10-08T19:55:29.978698310Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:29.980859 containerd[1438]: time="2024-10-08T19:55:29.980819321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:29.981765 containerd[1438]: time="2024-10-08T19:55:29.981726054Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 492.806395ms"
Oct 8 19:55:29.981765 containerd[1438]: time="2024-10-08T19:55:29.981760766Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Oct 8 19:55:30.000628 containerd[1438]: time="2024-10-08T19:55:30.000596531Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 8 19:55:30.745779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1266584864.mount: Deactivated successfully.
Oct 8 19:55:32.703905 containerd[1438]: time="2024-10-08T19:55:32.703844423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:32.704465 containerd[1438]: time="2024-10-08T19:55:32.704418086Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Oct 8 19:55:32.708293 containerd[1438]: time="2024-10-08T19:55:32.707500274Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:32.712381 containerd[1438]: time="2024-10-08T19:55:32.712307641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:55:32.713518 containerd[1438]: time="2024-10-08T19:55:32.713476913Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.71284481s"
Oct 8 19:55:32.713518 containerd[1438]: time="2024-10-08T19:55:32.713515178Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Oct 8 19:55:37.134357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 8 19:55:37.155547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:55:37.164347 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 8 19:55:37.164410 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 8 19:55:37.164596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:55:37.167473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:55:37.186591 systemd[1]: Reloading requested from client PID 2070 ('systemctl') (unit session-7.scope)...
Oct 8 19:55:37.186608 systemd[1]: Reloading...
Oct 8 19:55:37.256340 zram_generator::config[2107]: No configuration found.
Oct 8 19:55:37.367940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:55:37.422350 systemd[1]: Reloading finished in 235 ms.
Oct 8 19:55:37.462238 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 8 19:55:37.462306 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 8 19:55:37.462544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:55:37.464030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:55:37.555647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:55:37.560050 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 19:55:37.611491 kubelet[2152]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:55:37.611810 kubelet[2152]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 19:55:37.611855 kubelet[2152]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:55:37.611979 kubelet[2152]: I1008 19:55:37.611938 2152 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:55:38.611221 kubelet[2152]: I1008 19:55:38.611176 2152 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 8 19:55:38.611221 kubelet[2152]: I1008 19:55:38.611208 2152 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:55:38.611449 kubelet[2152]: I1008 19:55:38.611434 2152 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 8 19:55:38.628460 kubelet[2152]: E1008 19:55:38.628425 2152 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.134:6443: connect: connection refused
Oct 8 19:55:38.628837 kubelet[2152]: I1008 19:55:38.628542 2152 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:55:38.640568 kubelet[2152]: I1008 19:55:38.640531 2152 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:55:38.641533 kubelet[2152]: I1008 19:55:38.641501 2152 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:55:38.641744 kubelet[2152]: I1008 19:55:38.641719 2152 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 19:55:38.641825 kubelet[2152]: I1008 19:55:38.641747 2152 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:55:38.641825 kubelet[2152]: I1008 19:55:38.641757 2152 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 19:55:38.642856 kubelet[2152]: I1008 19:55:38.642822 2152 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:55:38.648391 kubelet[2152]: I1008 19:55:38.648359 2152 kubelet.go:396] "Attempting to sync node with API server"
Oct 8 19:55:38.648436 kubelet[2152]: I1008 19:55:38.648395 2152 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:55:38.648436 kubelet[2152]: I1008 19:55:38.648420 2152 kubelet.go:312] "Adding apiserver pod source"
Oct 8 19:55:38.648436 kubelet[2152]: I1008 19:55:38.648431 2152 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:55:38.649233 kubelet[2152]: W1008 19:55:38.649121 2152 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
Oct 8 19:55:38.649233 kubelet[2152]: E1008 19:55:38.649191 2152 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
Oct 8 19:55:38.650059 kubelet[2152]: I1008 19:55:38.650023 2152 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 8 19:55:38.650712 kubelet[2152]: W1008 19:55:38.650647 2152 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
Oct 8 19:55:38.650712 kubelet[2152]: E1008 19:55:38.650712 2152 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
Oct 8 19:55:38.652736 kubelet[2152]: I1008 19:55:38.652613 2152 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:55:38.653172 kubelet[2152]: W1008 19:55:38.653150 2152 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 8 19:55:38.654490 kubelet[2152]: I1008 19:55:38.654332 2152 server.go:1256] "Started kubelet"
Oct 8 19:55:38.656374 kubelet[2152]: I1008 19:55:38.654641 2152 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:55:38.656374 kubelet[2152]: I1008 19:55:38.654766 2152 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:55:38.656374 kubelet[2152]: I1008 19:55:38.655054 2152 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:55:38.656374 kubelet[2152]: I1008 19:55:38.655530 2152 server.go:461] "Adding debug handlers to kubelet server"
Oct 8 19:55:38.661068 kubelet[2152]: I1008 19:55:38.658370 2152 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:55:38.663658 kubelet[2152]: I1008 19:55:38.663557 2152 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 19:55:38.663758 kubelet[2152]: I1008 19:55:38.663667 2152 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 8 19:55:38.663903 kubelet[2152]: E1008 19:55:38.663868 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="200ms"
Oct 8 19:55:38.663947 kubelet[2152]: I1008 19:55:38.663921 2152 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 8 19:55:38.664344 kubelet[2152]: W1008 19:55:38.664281 2152 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
Oct 8 19:55:38.664434 kubelet[2152]: E1008 19:55:38.664351 2152 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
Oct 8 19:55:38.665933 kubelet[2152]: I1008 19:55:38.665880 2152 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:55:38.666035 kubelet[2152]: I1008 19:55:38.665980 2152 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:55:38.667517 kubelet[2152]: I1008 19:55:38.667492 2152 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:55:38.667955 kubelet[2152]: E1008 19:55:38.667923 2152 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc9269761d4fcd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:55:38.654281677 +0000 UTC m=+1.090008639,LastTimestamp:2024-10-08 19:55:38.654281677 +0000 UTC m=+1.090008639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 8 19:55:38.669533 kubelet[2152]: E1008 19:55:38.669457 2152 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 19:55:38.677894 kubelet[2152]: I1008 19:55:38.677865 2152 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:55:38.679763 kubelet[2152]: I1008 19:55:38.679423 2152 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:55:38.679763 kubelet[2152]: I1008 19:55:38.679453 2152 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:55:38.679763 kubelet[2152]: I1008 19:55:38.679468 2152 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 8 19:55:38.679763 kubelet[2152]: E1008 19:55:38.679518 2152 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:55:38.681003 kubelet[2152]: W1008 19:55:38.680951 2152 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
Oct 8 19:55:38.681144 kubelet[2152]: E1008 19:55:38.681036 2152 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
Oct 8 19:55:38.682564 kubelet[2152]: I1008 19:55:38.682526 2152 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:55:38.682564 kubelet[2152]: I1008 19:55:38.682549 2152 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:55:38.682564 kubelet[2152]: I1008 19:55:38.682567 2152 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:55:38.761330 kubelet[2152]: I1008 19:55:38.761247 2152 policy_none.go:49] "None policy: Start"
Oct 8 19:55:38.762048 kubelet[2152]: I1008 19:55:38.762019 2152 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 19:55:38.762142 kubelet[2152]: I1008 19:55:38.762072 2152 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 19:55:38.764805 kubelet[2152]: I1008 19:55:38.764763 2152 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 8 19:55:38.765231 kubelet[2152]: E1008 19:55:38.765193 2152 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost"
Oct 8 19:55:38.768643 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 8 19:55:38.781933 kubelet[2152]: E1008 19:55:38.779583 2152 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 8 19:55:38.785939 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 8 19:55:38.788964 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 8 19:55:38.800602 kubelet[2152]: I1008 19:55:38.800553 2152 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:55:38.801036 kubelet[2152]: I1008 19:55:38.800910 2152 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:55:38.801952 kubelet[2152]: E1008 19:55:38.801927 2152 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 8 19:55:38.864710 kubelet[2152]: E1008 19:55:38.864599 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="400ms" Oct 8 19:55:38.966863 kubelet[2152]: I1008 19:55:38.966807 2152 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:55:38.967375 kubelet[2152]: E1008 19:55:38.967354 2152 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Oct 8 19:55:38.980595 kubelet[2152]: I1008 19:55:38.980527 2152 topology_manager.go:215] "Topology Admit Handler" podUID="7727d6af5d39655f57b87922db539193" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 19:55:38.981759 kubelet[2152]: I1008 19:55:38.981737 2152 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 19:55:38.982557 kubelet[2152]: I1008 19:55:38.982534 2152 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 19:55:38.993223 systemd[1]: Created slice kubepods-burstable-pod7727d6af5d39655f57b87922db539193.slice - 
libcontainer container kubepods-burstable-pod7727d6af5d39655f57b87922db539193.slice. Oct 8 19:55:39.021819 systemd[1]: Created slice kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice - libcontainer container kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice. Oct 8 19:55:39.025074 systemd[1]: Created slice kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice - libcontainer container kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice. Oct 8 19:55:39.064936 kubelet[2152]: I1008 19:55:39.064886 2152 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:55:39.064936 kubelet[2152]: I1008 19:55:39.064932 2152 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7727d6af5d39655f57b87922db539193-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7727d6af5d39655f57b87922db539193\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:55:39.065092 kubelet[2152]: I1008 19:55:39.064956 2152 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7727d6af5d39655f57b87922db539193-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7727d6af5d39655f57b87922db539193\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:55:39.065092 kubelet[2152]: I1008 19:55:39.064976 2152 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod 
\"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:55:39.065092 kubelet[2152]: I1008 19:55:39.064995 2152 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:55:39.065092 kubelet[2152]: I1008 19:55:39.065013 2152 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7727d6af5d39655f57b87922db539193-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7727d6af5d39655f57b87922db539193\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:55:39.065092 kubelet[2152]: I1008 19:55:39.065031 2152 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:55:39.065192 kubelet[2152]: I1008 19:55:39.065050 2152 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:55:39.065192 kubelet[2152]: I1008 19:55:39.065068 2152 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:55:39.265815 kubelet[2152]: E1008 19:55:39.265690 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="800ms" Oct 8 19:55:39.320891 kubelet[2152]: E1008 19:55:39.320838 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:39.321558 containerd[1438]: time="2024-10-08T19:55:39.321514183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7727d6af5d39655f57b87922db539193,Namespace:kube-system,Attempt:0,}" Oct 8 19:55:39.324777 kubelet[2152]: E1008 19:55:39.324753 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:39.325252 containerd[1438]: time="2024-10-08T19:55:39.325210683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}" Oct 8 19:55:39.327739 kubelet[2152]: E1008 19:55:39.327481 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:39.327943 containerd[1438]: time="2024-10-08T19:55:39.327904674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}" Oct 8 19:55:39.369415 kubelet[2152]: I1008 19:55:39.369382 2152 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 
8 19:55:39.369823 kubelet[2152]: E1008 19:55:39.369786 2152 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Oct 8 19:55:39.766747 kubelet[2152]: W1008 19:55:39.766674 2152 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 8 19:55:39.766747 kubelet[2152]: E1008 19:55:39.766750 2152 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 8 19:55:39.797445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1155284188.mount: Deactivated successfully. Oct 8 19:55:39.805462 containerd[1438]: time="2024-10-08T19:55:39.805408326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:55:39.806598 containerd[1438]: time="2024-10-08T19:55:39.806566244Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:55:39.806941 containerd[1438]: time="2024-10-08T19:55:39.806905316Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:55:39.808157 containerd[1438]: time="2024-10-08T19:55:39.808110669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Oct 8 19:55:39.808834 containerd[1438]: time="2024-10-08T19:55:39.808790610Z" level=info msg="ImageUpdate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:55:39.809886 containerd[1438]: time="2024-10-08T19:55:39.809847706Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:55:39.810570 containerd[1438]: time="2024-10-08T19:55:39.810517138Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:55:39.811759 kubelet[2152]: W1008 19:55:39.811710 2152 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 8 19:55:39.811811 kubelet[2152]: E1008 19:55:39.811769 2152 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 8 19:55:39.815475 containerd[1438]: time="2024-10-08T19:55:39.815421988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:55:39.816355 containerd[1438]: time="2024-10-08T19:55:39.815990198Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 490.677494ms" Oct 8 19:55:39.816902 containerd[1438]: time="2024-10-08T19:55:39.816722928Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 488.717432ms" Oct 8 19:55:39.819353 containerd[1438]: time="2024-10-08T19:55:39.819307465Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 497.678274ms" Oct 8 19:55:39.971074 containerd[1438]: time="2024-10-08T19:55:39.970614016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:55:39.971074 containerd[1438]: time="2024-10-08T19:55:39.970858219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:55:39.971611 containerd[1438]: time="2024-10-08T19:55:39.970789486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:55:39.971611 containerd[1438]: time="2024-10-08T19:55:39.970867170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:55:39.971611 containerd[1438]: time="2024-10-08T19:55:39.970893345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:55:39.971611 containerd[1438]: time="2024-10-08T19:55:39.970907931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:55:39.971611 containerd[1438]: time="2024-10-08T19:55:39.971359773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:55:39.971859 containerd[1438]: time="2024-10-08T19:55:39.971623198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:55:39.971859 containerd[1438]: time="2024-10-08T19:55:39.971651291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:55:39.971859 containerd[1438]: time="2024-10-08T19:55:39.971583677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:55:39.971859 containerd[1438]: time="2024-10-08T19:55:39.971648454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:55:39.971859 containerd[1438]: time="2024-10-08T19:55:39.971666077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:55:39.997644 systemd[1]: Started cri-containerd-1d2f8df072971fef99e826bd196177c723a2548de8bad7915766e7519bfc7acf.scope - libcontainer container 1d2f8df072971fef99e826bd196177c723a2548de8bad7915766e7519bfc7acf. Oct 8 19:55:39.999103 systemd[1]: Started cri-containerd-a6545dd31ac9e9fe481b7a9a119c083b3ca941f5d972418576e4addeab9b7a32.scope - libcontainer container a6545dd31ac9e9fe481b7a9a119c083b3ca941f5d972418576e4addeab9b7a32. 
Oct 8 19:55:40.000254 systemd[1]: Started cri-containerd-b9c6a6c8d136d3f9df6fa4c35823d8b44f338f4a950ae9daf4db30ab745f71e2.scope - libcontainer container b9c6a6c8d136d3f9df6fa4c35823d8b44f338f4a950ae9daf4db30ab745f71e2. Oct 8 19:55:40.038434 containerd[1438]: time="2024-10-08T19:55:40.038238760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d2f8df072971fef99e826bd196177c723a2548de8bad7915766e7519bfc7acf\"" Oct 8 19:55:40.039830 containerd[1438]: time="2024-10-08T19:55:40.039796121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7727d6af5d39655f57b87922db539193,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6545dd31ac9e9fe481b7a9a119c083b3ca941f5d972418576e4addeab9b7a32\"" Oct 8 19:55:40.041670 containerd[1438]: time="2024-10-08T19:55:40.041626130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9c6a6c8d136d3f9df6fa4c35823d8b44f338f4a950ae9daf4db30ab745f71e2\"" Oct 8 19:55:40.050940 kubelet[2152]: E1008 19:55:40.050892 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:40.051086 kubelet[2152]: E1008 19:55:40.051052 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:40.052096 kubelet[2152]: E1008 19:55:40.052067 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:40.054026 containerd[1438]: time="2024-10-08T19:55:40.053979783Z" level=info 
msg="CreateContainer within sandbox \"a6545dd31ac9e9fe481b7a9a119c083b3ca941f5d972418576e4addeab9b7a32\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 19:55:40.054142 containerd[1438]: time="2024-10-08T19:55:40.053991813Z" level=info msg="CreateContainer within sandbox \"1d2f8df072971fef99e826bd196177c723a2548de8bad7915766e7519bfc7acf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 19:55:40.055391 containerd[1438]: time="2024-10-08T19:55:40.055355777Z" level=info msg="CreateContainer within sandbox \"b9c6a6c8d136d3f9df6fa4c35823d8b44f338f4a950ae9daf4db30ab745f71e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 19:55:40.066697 kubelet[2152]: E1008 19:55:40.066656 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="1.6s" Oct 8 19:55:40.072688 containerd[1438]: time="2024-10-08T19:55:40.072638055Z" level=info msg="CreateContainer within sandbox \"b9c6a6c8d136d3f9df6fa4c35823d8b44f338f4a950ae9daf4db30ab745f71e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"559707b7f811cc7352f73c6d0ce4b278d945fedcedc4d70f9ca60f272c381781\"" Oct 8 19:55:40.073582 containerd[1438]: time="2024-10-08T19:55:40.073357245Z" level=info msg="StartContainer for \"559707b7f811cc7352f73c6d0ce4b278d945fedcedc4d70f9ca60f272c381781\"" Oct 8 19:55:40.073582 containerd[1438]: time="2024-10-08T19:55:40.073494569Z" level=info msg="CreateContainer within sandbox \"1d2f8df072971fef99e826bd196177c723a2548de8bad7915766e7519bfc7acf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"105a73c19987e2847251539b97d6716dc2a81b559233c365cbcf7584af27be8e\"" Oct 8 19:55:40.073861 containerd[1438]: time="2024-10-08T19:55:40.073832762Z" level=info 
msg="StartContainer for \"105a73c19987e2847251539b97d6716dc2a81b559233c365cbcf7584af27be8e\"" Oct 8 19:55:40.077958 containerd[1438]: time="2024-10-08T19:55:40.077912865Z" level=info msg="CreateContainer within sandbox \"a6545dd31ac9e9fe481b7a9a119c083b3ca941f5d972418576e4addeab9b7a32\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d30aa0104208731db8dc38d17b42490adbed1c26f35f0cdc959b32994814cf08\"" Oct 8 19:55:40.078716 containerd[1438]: time="2024-10-08T19:55:40.078412682Z" level=info msg="StartContainer for \"d30aa0104208731db8dc38d17b42490adbed1c26f35f0cdc959b32994814cf08\"" Oct 8 19:55:40.099517 systemd[1]: Started cri-containerd-105a73c19987e2847251539b97d6716dc2a81b559233c365cbcf7584af27be8e.scope - libcontainer container 105a73c19987e2847251539b97d6716dc2a81b559233c365cbcf7584af27be8e. Oct 8 19:55:40.103477 systemd[1]: Started cri-containerd-559707b7f811cc7352f73c6d0ce4b278d945fedcedc4d70f9ca60f272c381781.scope - libcontainer container 559707b7f811cc7352f73c6d0ce4b278d945fedcedc4d70f9ca60f272c381781. Oct 8 19:55:40.104380 systemd[1]: Started cri-containerd-d30aa0104208731db8dc38d17b42490adbed1c26f35f0cdc959b32994814cf08.scope - libcontainer container d30aa0104208731db8dc38d17b42490adbed1c26f35f0cdc959b32994814cf08. 
Oct 8 19:55:40.143721 containerd[1438]: time="2024-10-08T19:55:40.143672109Z" level=info msg="StartContainer for \"105a73c19987e2847251539b97d6716dc2a81b559233c365cbcf7584af27be8e\" returns successfully" Oct 8 19:55:40.163771 containerd[1438]: time="2024-10-08T19:55:40.159741374Z" level=info msg="StartContainer for \"559707b7f811cc7352f73c6d0ce4b278d945fedcedc4d70f9ca60f272c381781\" returns successfully" Oct 8 19:55:40.163771 containerd[1438]: time="2024-10-08T19:55:40.159787455Z" level=info msg="StartContainer for \"d30aa0104208731db8dc38d17b42490adbed1c26f35f0cdc959b32994814cf08\" returns successfully" Oct 8 19:55:40.173427 kubelet[2152]: W1008 19:55:40.171895 2152 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 8 19:55:40.173427 kubelet[2152]: E1008 19:55:40.171949 2152 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 8 19:55:40.173427 kubelet[2152]: I1008 19:55:40.172273 2152 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:55:40.173427 kubelet[2152]: E1008 19:55:40.172589 2152 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Oct 8 19:55:40.195219 kubelet[2152]: W1008 19:55:40.191234 2152 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 8 19:55:40.195219 
kubelet[2152]: E1008 19:55:40.191290 2152 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Oct 8 19:55:40.695175 kubelet[2152]: E1008 19:55:40.694462 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:40.696268 kubelet[2152]: E1008 19:55:40.696079 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:40.698682 kubelet[2152]: E1008 19:55:40.698624 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:41.703339 kubelet[2152]: E1008 19:55:41.703286 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:55:41.774964 kubelet[2152]: I1008 19:55:41.774937 2152 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:55:42.023401 kubelet[2152]: E1008 19:55:42.021555 2152 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 8 19:55:42.108525 kubelet[2152]: I1008 19:55:42.107734 2152 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 19:55:42.123090 kubelet[2152]: E1008 19:55:42.122936 2152 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:42.223901 kubelet[2152]: E1008 19:55:42.223853 2152 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:42.324421 kubelet[2152]: E1008 19:55:42.324292 2152 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:55:42.650575 kubelet[2152]: I1008 19:55:42.650463 2152 apiserver.go:52] "Watching apiserver" Oct 8 19:55:42.664122 kubelet[2152]: I1008 19:55:42.664085 2152 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 19:55:44.702044 systemd[1]: Reloading requested from client PID 2432 ('systemctl') (unit session-7.scope)... Oct 8 19:55:44.702062 systemd[1]: Reloading... Oct 8 19:55:44.760572 zram_generator::config[2469]: No configuration found. Oct 8 19:55:44.887332 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:55:44.952047 systemd[1]: Reloading finished in 249 ms. Oct 8 19:55:44.987582 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:55:44.997663 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 19:55:44.997864 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:55:44.997984 systemd[1]: kubelet.service: Consumed 1.465s CPU time, 116.9M memory peak, 0B memory swap peak. Oct 8 19:55:45.011794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:55:45.109809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:55:45.115084 (kubelet)[2511]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:55:45.169669 kubelet[2511]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:55:45.169669 kubelet[2511]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:55:45.169669 kubelet[2511]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:55:45.169669 kubelet[2511]: I1008 19:55:45.168661 2511 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:55:45.174989 kubelet[2511]: I1008 19:55:45.174892 2511 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 19:55:45.174989 kubelet[2511]: I1008 19:55:45.174926 2511 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:55:45.175353 kubelet[2511]: I1008 19:55:45.175329 2511 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 19:55:45.177576 kubelet[2511]: I1008 19:55:45.177537 2511 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 8 19:55:45.179822 kubelet[2511]: I1008 19:55:45.179787 2511 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:55:45.185692 kubelet[2511]: I1008 19:55:45.185179 2511 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Oct 8 19:55:45.185692 kubelet[2511]: I1008 19:55:45.185421 2511 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:55:45.185692 kubelet[2511]: I1008 19:55:45.185587 2511 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 19:55:45.185692 kubelet[2511]: I1008 19:55:45.185608 2511 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:55:45.185692 kubelet[2511]: I1008 19:55:45.185618 2511 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 19:55:45.185953 kubelet[2511]: I1008 19:55:45.185643 2511 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:55:45.186133 kubelet[2511]: I1008 19:55:45.186121 2511 kubelet.go:396] "Attempting to sync node with API server"
Oct 8 19:55:45.186825 kubelet[2511]: I1008 19:55:45.186775 2511 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:55:45.186825 kubelet[2511]: I1008 19:55:45.186825 2511 kubelet.go:312] "Adding apiserver pod source"
Oct 8 19:55:45.186929 kubelet[2511]: I1008 19:55:45.186839 2511 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:55:45.187795 kubelet[2511]: I1008 19:55:45.187762 2511 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 8 19:55:45.190317 kubelet[2511]: I1008 19:55:45.187948 2511 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:55:45.190317 kubelet[2511]: I1008 19:55:45.188335 2511 server.go:1256] "Started kubelet"
Oct 8 19:55:45.190317 kubelet[2511]: I1008 19:55:45.189941 2511 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:55:45.191719 kubelet[2511]: I1008 19:55:45.191306 2511 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:55:45.191719 kubelet[2511]: I1008 19:55:45.191661 2511 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:55:45.191719 kubelet[2511]: I1008 19:55:45.191661 2511 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 19:55:45.197831 kubelet[2511]: I1008 19:55:45.197732 2511 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:55:45.198117 kubelet[2511]: I1008 19:55:45.197850 2511 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:55:45.198712 kubelet[2511]: I1008 19:55:45.198689 2511 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:55:45.198880 kubelet[2511]: I1008 19:55:45.198862 2511 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:55:45.204357 kubelet[2511]: I1008 19:55:45.204015 2511 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 8 19:55:45.204357 kubelet[2511]: I1008 19:55:45.204200 2511 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 8 19:55:45.206157 kubelet[2511]: I1008 19:55:45.206117 2511 server.go:461] "Adding debug handlers to kubelet server"
Oct 8 19:55:45.219452 kubelet[2511]: I1008 19:55:45.219371 2511 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:55:45.221268 kubelet[2511]: I1008 19:55:45.221226 2511 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:55:45.221268 kubelet[2511]: I1008 19:55:45.221249 2511 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:55:45.221268 kubelet[2511]: I1008 19:55:45.221266 2511 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 8 19:55:45.222612 kubelet[2511]: E1008 19:55:45.221343 2511 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:55:45.241418 kubelet[2511]: I1008 19:55:45.240983 2511 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:55:45.241418 kubelet[2511]: I1008 19:55:45.241009 2511 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:55:45.241418 kubelet[2511]: I1008 19:55:45.241030 2511 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:55:45.241418 kubelet[2511]: I1008 19:55:45.241187 2511 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 8 19:55:45.241418 kubelet[2511]: I1008 19:55:45.241207 2511 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 8 19:55:45.241418 kubelet[2511]: I1008 19:55:45.241214 2511 policy_none.go:49] "None policy: Start"
Oct 8 19:55:45.241987 kubelet[2511]: I1008 19:55:45.241971 2511 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 19:55:45.242032 kubelet[2511]: I1008 19:55:45.241998 2511 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 19:55:45.245852 kubelet[2511]: I1008 19:55:45.245394 2511 state_mem.go:75] "Updated machine memory state"
Oct 8 19:55:45.250612 kubelet[2511]: I1008 19:55:45.250568 2511 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 19:55:45.250834 kubelet[2511]: I1008 19:55:45.250817 2511 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 19:55:45.300786 kubelet[2511]: I1008 19:55:45.300488 2511 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 8 19:55:45.307942 kubelet[2511]: I1008 19:55:45.307899 2511 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Oct 8 19:55:45.308198 kubelet[2511]: I1008 19:55:45.308173 2511 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Oct 8 19:55:45.322195 kubelet[2511]: I1008 19:55:45.322156 2511 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost"
Oct 8 19:55:45.322351 kubelet[2511]: I1008 19:55:45.322250 2511 topology_manager.go:215] "Topology Admit Handler" podUID="7727d6af5d39655f57b87922db539193" podNamespace="kube-system" podName="kube-apiserver-localhost"
Oct 8 19:55:45.322351 kubelet[2511]: I1008 19:55:45.322310 2511 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Oct 8 19:55:45.505624 kubelet[2511]: I1008 19:55:45.505510 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7727d6af5d39655f57b87922db539193-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7727d6af5d39655f57b87922db539193\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:55:45.505624 kubelet[2511]: I1008 19:55:45.505560 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:55:45.505624 kubelet[2511]: I1008 19:55:45.505583 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7727d6af5d39655f57b87922db539193-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7727d6af5d39655f57b87922db539193\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:55:45.505792 kubelet[2511]: I1008 19:55:45.505618 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:55:45.505792 kubelet[2511]: I1008 19:55:45.505678 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:55:45.505792 kubelet[2511]: I1008 19:55:45.505699 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:55:45.505792 kubelet[2511]: I1008 19:55:45.505720 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost"
Oct 8 19:55:45.505884 kubelet[2511]: I1008 19:55:45.505784 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost"
Oct 8 19:55:45.505884 kubelet[2511]: I1008 19:55:45.505820 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7727d6af5d39655f57b87922db539193-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7727d6af5d39655f57b87922db539193\") " pod="kube-system/kube-apiserver-localhost"
Oct 8 19:55:45.630894 kubelet[2511]: E1008 19:55:45.630852 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:55:45.631379 kubelet[2511]: E1008 19:55:45.631274 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:55:45.631874 kubelet[2511]: E1008 19:55:45.631795 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:55:46.187684 kubelet[2511]: I1008 19:55:46.187634 2511 apiserver.go:52] "Watching apiserver"
Oct 8 19:55:46.204630 kubelet[2511]: I1008 19:55:46.204582 2511 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Oct 8 19:55:46.232288 kubelet[2511]: E1008 19:55:46.232246 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:55:46.233007 kubelet[2511]: E1008 19:55:46.232971 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:55:46.241442 kubelet[2511]: E1008 19:55:46.241394 2511 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 8 19:55:46.241859 kubelet[2511]: E1008 19:55:46.241836 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:55:46.265922 kubelet[2511]: I1008 19:55:46.265883 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.2658259090000001 podStartE2EDuration="1.265825909s" podCreationTimestamp="2024-10-08 19:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:55:46.259690415 +0000 UTC m=+1.139513084" watchObservedRunningTime="2024-10-08 19:55:46.265825909 +0000 UTC m=+1.145648578"
Oct 8 19:55:46.277141 kubelet[2511]: I1008 19:55:46.277096 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.277058968 podStartE2EDuration="1.277058968s" podCreationTimestamp="2024-10-08 19:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:55:46.276404015 +0000 UTC m=+1.156226683" watchObservedRunningTime="2024-10-08 19:55:46.277058968 +0000 UTC m=+1.156881637"
Oct 8 19:55:46.277299 kubelet[2511]: I1008 19:55:46.277233 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.27721491 podStartE2EDuration="1.27721491s" podCreationTimestamp="2024-10-08 19:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:55:46.266165622 +0000 UTC m=+1.145988291" watchObservedRunningTime="2024-10-08 19:55:46.27721491 +0000 UTC m=+1.157037579"
Oct 8 19:55:47.233998 kubelet[2511]: E1008 19:55:47.233926 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:55:49.257677 sudo[1617]: pam_unix(sudo:session): session closed for user root
Oct 8 19:55:49.259221 sshd[1614]: pam_unix(sshd:session): session closed for user core
Oct 8 19:55:49.261888 systemd[1]: sshd@6-10.0.0.134:22-10.0.0.1:58748.service: Deactivated successfully.
Oct 8 19:55:49.263556 systemd[1]: session-7.scope: Deactivated successfully.
Oct 8 19:55:49.264415 systemd[1]: session-7.scope: Consumed 6.408s CPU time, 134.9M memory peak, 0B memory swap peak.
Oct 8 19:55:49.265705 systemd-logind[1419]: Session 7 logged out. Waiting for processes to exit.
Oct 8 19:55:49.266744 systemd-logind[1419]: Removed session 7.
Oct 8 19:55:51.200038 kubelet[2511]: E1008 19:55:51.199938 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:55:51.239019 kubelet[2511]: E1008 19:55:51.238941 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:55:54.843129 kubelet[2511]: E1008 19:55:54.843097 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:55:55.244943 kubelet[2511]: E1008 19:55:55.244890 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:55:56.093733 kubelet[2511]: E1008 19:55:56.093578 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:55:56.246670 kubelet[2511]: E1008 19:55:56.246646 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:55:59.128401 update_engine[1423]: I1008 19:55:59.128352 1423 update_attempter.cc:509] Updating boot flags...
Oct 8 19:55:59.165357 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2609)
Oct 8 19:55:59.220591 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2608)
Oct 8 19:55:59.220732 containerd[1438]: time="2024-10-08T19:55:59.219873669Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 8 19:55:59.221081 kubelet[2511]: I1008 19:55:59.219510 2511 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 8 19:55:59.221081 kubelet[2511]: I1008 19:55:59.220068 2511 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 8 19:56:00.101412 kubelet[2511]: I1008 19:56:00.101339 2511 topology_manager.go:215] "Topology Admit Handler" podUID="f7922e91-6b4d-4e3d-8e0d-a2ad49e7c989" podNamespace="kube-system" podName="kube-proxy-dhhd6"
Oct 8 19:56:00.111661 systemd[1]: Created slice kubepods-besteffort-podf7922e91_6b4d_4e3d_8e0d_a2ad49e7c989.slice - libcontainer container kubepods-besteffort-podf7922e91_6b4d_4e3d_8e0d_a2ad49e7c989.slice.
Oct 8 19:56:00.114637 kubelet[2511]: I1008 19:56:00.114605 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f7922e91-6b4d-4e3d-8e0d-a2ad49e7c989-kube-proxy\") pod \"kube-proxy-dhhd6\" (UID: \"f7922e91-6b4d-4e3d-8e0d-a2ad49e7c989\") " pod="kube-system/kube-proxy-dhhd6"
Oct 8 19:56:00.114738 kubelet[2511]: I1008 19:56:00.114647 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7922e91-6b4d-4e3d-8e0d-a2ad49e7c989-xtables-lock\") pod \"kube-proxy-dhhd6\" (UID: \"f7922e91-6b4d-4e3d-8e0d-a2ad49e7c989\") " pod="kube-system/kube-proxy-dhhd6"
Oct 8 19:56:00.114738 kubelet[2511]: I1008 19:56:00.114673 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7922e91-6b4d-4e3d-8e0d-a2ad49e7c989-lib-modules\") pod \"kube-proxy-dhhd6\" (UID: \"f7922e91-6b4d-4e3d-8e0d-a2ad49e7c989\") " pod="kube-system/kube-proxy-dhhd6"
Oct 8 19:56:00.114738 kubelet[2511]: I1008 19:56:00.114692 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brdcd\" (UniqueName: \"kubernetes.io/projected/f7922e91-6b4d-4e3d-8e0d-a2ad49e7c989-kube-api-access-brdcd\") pod \"kube-proxy-dhhd6\" (UID: \"f7922e91-6b4d-4e3d-8e0d-a2ad49e7c989\") " pod="kube-system/kube-proxy-dhhd6"
Oct 8 19:56:00.268344 kubelet[2511]: I1008 19:56:00.268277 2511 topology_manager.go:215] "Topology Admit Handler" podUID="c44c164b-a406-4fe0-9b6e-abaa5d660ee8" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-9lpd9"
Oct 8 19:56:00.275678 systemd[1]: Created slice kubepods-besteffort-podc44c164b_a406_4fe0_9b6e_abaa5d660ee8.slice - libcontainer container kubepods-besteffort-podc44c164b_a406_4fe0_9b6e_abaa5d660ee8.slice.
Oct 8 19:56:00.315235 kubelet[2511]: I1008 19:56:00.315194 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c44c164b-a406-4fe0-9b6e-abaa5d660ee8-var-lib-calico\") pod \"tigera-operator-5d56685c77-9lpd9\" (UID: \"c44c164b-a406-4fe0-9b6e-abaa5d660ee8\") " pod="tigera-operator/tigera-operator-5d56685c77-9lpd9"
Oct 8 19:56:00.315235 kubelet[2511]: I1008 19:56:00.315240 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4p4t\" (UniqueName: \"kubernetes.io/projected/c44c164b-a406-4fe0-9b6e-abaa5d660ee8-kube-api-access-v4p4t\") pod \"tigera-operator-5d56685c77-9lpd9\" (UID: \"c44c164b-a406-4fe0-9b6e-abaa5d660ee8\") " pod="tigera-operator/tigera-operator-5d56685c77-9lpd9"
Oct 8 19:56:00.419373 kubelet[2511]: E1008 19:56:00.419249 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:00.419953 containerd[1438]: time="2024-10-08T19:56:00.419911529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dhhd6,Uid:f7922e91-6b4d-4e3d-8e0d-a2ad49e7c989,Namespace:kube-system,Attempt:0,}"
Oct 8 19:56:00.445291 containerd[1438]: time="2024-10-08T19:56:00.445193694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:56:00.445291 containerd[1438]: time="2024-10-08T19:56:00.445248096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:56:00.445291 containerd[1438]: time="2024-10-08T19:56:00.445263176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:56:00.445570 containerd[1438]: time="2024-10-08T19:56:00.445273096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:56:00.468486 systemd[1]: Started cri-containerd-26570e5652f1b4e0baf0818f89581dd52ff2e8542364e4c1d0fc4a4de1954ee7.scope - libcontainer container 26570e5652f1b4e0baf0818f89581dd52ff2e8542364e4c1d0fc4a4de1954ee7.
Oct 8 19:56:00.496784 containerd[1438]: time="2024-10-08T19:56:00.496732244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dhhd6,Uid:f7922e91-6b4d-4e3d-8e0d-a2ad49e7c989,Namespace:kube-system,Attempt:0,} returns sandbox id \"26570e5652f1b4e0baf0818f89581dd52ff2e8542364e4c1d0fc4a4de1954ee7\""
Oct 8 19:56:00.497696 kubelet[2511]: E1008 19:56:00.497658 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:00.500355 containerd[1438]: time="2024-10-08T19:56:00.500026547Z" level=info msg="CreateContainer within sandbox \"26570e5652f1b4e0baf0818f89581dd52ff2e8542364e4c1d0fc4a4de1954ee7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 8 19:56:00.512778 containerd[1438]: time="2024-10-08T19:56:00.512739232Z" level=info msg="CreateContainer within sandbox \"26570e5652f1b4e0baf0818f89581dd52ff2e8542364e4c1d0fc4a4de1954ee7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"64c958c84a7b003ee987aee39aee7bc24f1b0a5c345e5d2e6f6078ff4427ec61\""
Oct 8 19:56:00.513214 containerd[1438]: time="2024-10-08T19:56:00.513126839Z" level=info msg="StartContainer for \"64c958c84a7b003ee987aee39aee7bc24f1b0a5c345e5d2e6f6078ff4427ec61\""
Oct 8 19:56:00.550470 systemd[1]: Started cri-containerd-64c958c84a7b003ee987aee39aee7bc24f1b0a5c345e5d2e6f6078ff4427ec61.scope - libcontainer container 64c958c84a7b003ee987aee39aee7bc24f1b0a5c345e5d2e6f6078ff4427ec61.
Oct 8 19:56:00.579734 containerd[1438]: time="2024-10-08T19:56:00.578626057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-9lpd9,Uid:c44c164b-a406-4fe0-9b6e-abaa5d660ee8,Namespace:tigera-operator,Attempt:0,}"
Oct 8 19:56:00.584230 containerd[1438]: time="2024-10-08T19:56:00.584125402Z" level=info msg="StartContainer for \"64c958c84a7b003ee987aee39aee7bc24f1b0a5c345e5d2e6f6078ff4427ec61\" returns successfully"
Oct 8 19:56:00.617063 containerd[1438]: time="2024-10-08T19:56:00.616949033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:56:00.617063 containerd[1438]: time="2024-10-08T19:56:00.617003114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:56:00.617063 containerd[1438]: time="2024-10-08T19:56:00.617016354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:56:00.617063 containerd[1438]: time="2024-10-08T19:56:00.617032794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:56:00.642480 systemd[1]: Started cri-containerd-12e9a872627dbe4cdd7894e6bde7a471a3681b642254ef53e71b3d3473d8962d.scope - libcontainer container 12e9a872627dbe4cdd7894e6bde7a471a3681b642254ef53e71b3d3473d8962d.
Oct 8 19:56:00.675247 containerd[1438]: time="2024-10-08T19:56:00.675211071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-9lpd9,Uid:c44c164b-a406-4fe0-9b6e-abaa5d660ee8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"12e9a872627dbe4cdd7894e6bde7a471a3681b642254ef53e71b3d3473d8962d\""
Oct 8 19:56:00.683133 containerd[1438]: time="2024-10-08T19:56:00.682893419Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Oct 8 19:56:01.257046 kubelet[2511]: E1008 19:56:01.257000 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 19:56:01.831896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3586175952.mount: Deactivated successfully.
Oct 8 19:56:03.076375 containerd[1438]: time="2024-10-08T19:56:03.076228286Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:56:03.077281 containerd[1438]: time="2024-10-08T19:56:03.077050980Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485903"
Oct 8 19:56:03.078042 containerd[1438]: time="2024-10-08T19:56:03.077998156Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:56:03.080437 containerd[1438]: time="2024-10-08T19:56:03.080399475Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:56:03.097032 containerd[1438]: time="2024-10-08T19:56:03.096884069Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 2.413942128s"
Oct 8 19:56:03.097032 containerd[1438]: time="2024-10-08T19:56:03.096929629Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\""
Oct 8 19:56:03.107331 containerd[1438]: time="2024-10-08T19:56:03.107247760Z" level=info msg="CreateContainer within sandbox \"12e9a872627dbe4cdd7894e6bde7a471a3681b642254ef53e71b3d3473d8962d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 8 19:56:03.148730 containerd[1438]: time="2024-10-08T19:56:03.148653727Z" level=info msg="CreateContainer within sandbox \"12e9a872627dbe4cdd7894e6bde7a471a3681b642254ef53e71b3d3473d8962d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c1e3a50ddbdf353622a3af03f1fcf15b6f593ccfd01fe44e7673037fcb968d26\""
Oct 8 19:56:03.149233 containerd[1438]: time="2024-10-08T19:56:03.149172856Z" level=info msg="StartContainer for \"c1e3a50ddbdf353622a3af03f1fcf15b6f593ccfd01fe44e7673037fcb968d26\""
Oct 8 19:56:03.176494 systemd[1]: Started cri-containerd-c1e3a50ddbdf353622a3af03f1fcf15b6f593ccfd01fe44e7673037fcb968d26.scope - libcontainer container c1e3a50ddbdf353622a3af03f1fcf15b6f593ccfd01fe44e7673037fcb968d26.
Oct 8 19:56:03.200823 containerd[1438]: time="2024-10-08T19:56:03.200778311Z" level=info msg="StartContainer for \"c1e3a50ddbdf353622a3af03f1fcf15b6f593ccfd01fe44e7673037fcb968d26\" returns successfully"
Oct 8 19:56:03.312384 kubelet[2511]: I1008 19:56:03.312331 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dhhd6" podStartSLOduration=3.31107198 podStartE2EDuration="3.31107198s" podCreationTimestamp="2024-10-08 19:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:56:01.265559082 +0000 UTC m=+16.145381751" watchObservedRunningTime="2024-10-08 19:56:03.31107198 +0000 UTC m=+18.190894649"
Oct 8 19:56:06.696567 kubelet[2511]: I1008 19:56:06.696512 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-9lpd9" podStartSLOduration=4.279586137 podStartE2EDuration="6.696468997s" podCreationTimestamp="2024-10-08 19:56:00 +0000 UTC" firstStartedPulling="2024-10-08 19:56:00.682084363 +0000 UTC m=+15.561907032" lastFinishedPulling="2024-10-08 19:56:03.098967223 +0000 UTC m=+17.978789892" observedRunningTime="2024-10-08 19:56:03.312464283 +0000 UTC m=+18.192286912" watchObservedRunningTime="2024-10-08 19:56:06.696468997 +0000 UTC m=+21.576291666"
Oct 8 19:56:06.698608 kubelet[2511]: I1008 19:56:06.697739 2511 topology_manager.go:215] "Topology Admit Handler" podUID="a8694277-8546-4101-8a57-a7722a36257c" podNamespace="calico-system" podName="calico-typha-84495cb787-m8tph"
Oct 8 19:56:06.711255 systemd[1]: Created slice kubepods-besteffort-poda8694277_8546_4101_8a57_a7722a36257c.slice - libcontainer container kubepods-besteffort-poda8694277_8546_4101_8a57_a7722a36257c.slice.
Oct 8 19:56:06.760865 kubelet[2511]: I1008 19:56:06.760803 2511 topology_manager.go:215] "Topology Admit Handler" podUID="0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676" podNamespace="calico-system" podName="calico-node-kktvz"
Oct 8 19:56:06.769351 kubelet[2511]: I1008 19:56:06.769196 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v4st\" (UniqueName: \"kubernetes.io/projected/a8694277-8546-4101-8a57-a7722a36257c-kube-api-access-4v4st\") pod \"calico-typha-84495cb787-m8tph\" (UID: \"a8694277-8546-4101-8a57-a7722a36257c\") " pod="calico-system/calico-typha-84495cb787-m8tph"
Oct 8 19:56:06.769351 kubelet[2511]: I1008 19:56:06.769243 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a8694277-8546-4101-8a57-a7722a36257c-typha-certs\") pod \"calico-typha-84495cb787-m8tph\" (UID: \"a8694277-8546-4101-8a57-a7722a36257c\") " pod="calico-system/calico-typha-84495cb787-m8tph"
Oct 8 19:56:06.769351 kubelet[2511]: I1008 19:56:06.769268 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8694277-8546-4101-8a57-a7722a36257c-tigera-ca-bundle\") pod \"calico-typha-84495cb787-m8tph\" (UID: \"a8694277-8546-4101-8a57-a7722a36257c\") " pod="calico-system/calico-typha-84495cb787-m8tph"
Oct 8 19:56:06.770602 systemd[1]: Created slice kubepods-besteffort-pod0e3c79ec_b2d9_4b96_bb7c_6401f6ff5676.slice - libcontainer container kubepods-besteffort-pod0e3c79ec_b2d9_4b96_bb7c_6401f6ff5676.slice.
Oct 8 19:56:06.870170 kubelet[2511]: I1008 19:56:06.870117 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676-policysync\") pod \"calico-node-kktvz\" (UID: \"0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676\") " pod="calico-system/calico-node-kktvz"
Oct 8 19:56:06.870170 kubelet[2511]: I1008 19:56:06.870166 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676-tigera-ca-bundle\") pod \"calico-node-kktvz\" (UID: \"0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676\") " pod="calico-system/calico-node-kktvz"
Oct 8 19:56:06.870368 kubelet[2511]: I1008 19:56:06.870202 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676-cni-net-dir\") pod \"calico-node-kktvz\" (UID: \"0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676\") " pod="calico-system/calico-node-kktvz"
Oct 8 19:56:06.870368 kubelet[2511]: I1008 19:56:06.870349 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676-cni-log-dir\") pod \"calico-node-kktvz\" (UID: \"0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676\") " pod="calico-system/calico-node-kktvz"
Oct 8 19:56:06.870419 kubelet[2511]: I1008 19:56:06.870399 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676-flexvol-driver-host\") pod \"calico-node-kktvz\" (UID: \"0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676\") " pod="calico-system/calico-node-kktvz"
Oct 8 19:56:06.870439 kubelet[2511]: I1008 19:56:06.870425 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676-lib-modules\") pod \"calico-node-kktvz\" (UID: \"0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676\") " pod="calico-system/calico-node-kktvz"
Oct 8 19:56:06.870462 kubelet[2511]: I1008 19:56:06.870446 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676-xtables-lock\") pod \"calico-node-kktvz\" (UID: \"0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676\") " pod="calico-system/calico-node-kktvz"
Oct 8 19:56:06.870486 kubelet[2511]: I1008 19:56:06.870473 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676-cni-bin-dir\") pod \"calico-node-kktvz\" (UID: \"0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676\") " pod="calico-system/calico-node-kktvz"
Oct 8 19:56:06.870577 kubelet[2511]: I1008 19:56:06.870560 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz996\" (UniqueName: \"kubernetes.io/projected/0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676-kube-api-access-tz996\") pod \"calico-node-kktvz\" (UID: \"0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676\") " pod="calico-system/calico-node-kktvz"
Oct 8 19:56:06.870800 kubelet[2511]: I1008 19:56:06.870775 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676-node-certs\") pod \"calico-node-kktvz\" (UID: \"0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676\") " pod="calico-system/calico-node-kktvz"
Oct 8 19:56:06.871028 kubelet[2511]: I1008 19:56:06.871011 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676-var-run-calico\") pod \"calico-node-kktvz\" (UID: \"0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676\") " pod="calico-system/calico-node-kktvz"
Oct 8 19:56:06.871569 kubelet[2511]: I1008 19:56:06.871215 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676-var-lib-calico\") pod \"calico-node-kktvz\" (UID: \"0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676\") " pod="calico-system/calico-node-kktvz"
Oct 8 19:56:06.893778 kubelet[2511]: I1008 19:56:06.893133 2511 topology_manager.go:215] "Topology Admit Handler" podUID="b6bbb92e-10e0-4d4e-8c4d-e05b88c82846" podNamespace="calico-system" podName="csi-node-driver-8q6v8"
Oct 8 19:56:06.893778 kubelet[2511]: E1008 19:56:06.893446 2511 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8q6v8" podUID="b6bbb92e-10e0-4d4e-8c4d-e05b88c82846"
Oct 8 19:56:06.971587 kubelet[2511]: I1008 19:56:06.971441 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8c2l\" (UniqueName: \"kubernetes.io/projected/b6bbb92e-10e0-4d4e-8c4d-e05b88c82846-kube-api-access-n8c2l\") pod \"csi-node-driver-8q6v8\" (UID: \"b6bbb92e-10e0-4d4e-8c4d-e05b88c82846\") " pod="calico-system/csi-node-driver-8q6v8"
Oct 8 19:56:06.971587 kubelet[2511]: I1008 19:56:06.971516 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b6bbb92e-10e0-4d4e-8c4d-e05b88c82846-socket-dir\") pod \"csi-node-driver-8q6v8\" (UID: \"b6bbb92e-10e0-4d4e-8c4d-e05b88c82846\") " pod="calico-system/csi-node-driver-8q6v8"
Oct 8 19:56:06.971587 kubelet[2511]: I1008 19:56:06.971542 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b6bbb92e-10e0-4d4e-8c4d-e05b88c82846-registration-dir\") pod \"csi-node-driver-8q6v8\" (UID: \"b6bbb92e-10e0-4d4e-8c4d-e05b88c82846\") " pod="calico-system/csi-node-driver-8q6v8"
Oct 8 19:56:06.971933 kubelet[2511]: I1008 19:56:06.971596 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b6bbb92e-10e0-4d4e-8c4d-e05b88c82846-kubelet-dir\") pod \"csi-node-driver-8q6v8\" (UID: \"b6bbb92e-10e0-4d4e-8c4d-e05b88c82846\") " pod="calico-system/csi-node-driver-8q6v8"
Oct 8 19:56:06.971933 kubelet[2511]: I1008 19:56:06.971669 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b6bbb92e-10e0-4d4e-8c4d-e05b88c82846-varrun\") pod \"csi-node-driver-8q6v8\" (UID: \"b6bbb92e-10e0-4d4e-8c4d-e05b88c82846\") " pod="calico-system/csi-node-driver-8q6v8"
Oct 8 19:56:06.975698 kubelet[2511]: E1008 19:56:06.975560 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:56:06.975698 kubelet[2511]: W1008 19:56:06.975604 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:56:06.979757 kubelet[2511]: E1008 19:56:06.976308 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 8 19:56:06.984598 kubelet[2511]: E1008 19:56:06.983828 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:06.984598 kubelet[2511]: W1008 19:56:06.983849 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:06.984598 kubelet[2511]: E1008 19:56:06.983889 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:06.984598 kubelet[2511]: E1008 19:56:06.984149 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:06.984598 kubelet[2511]: W1008 19:56:06.984160 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:06.984598 kubelet[2511]: E1008 19:56:06.984183 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:06.984598 kubelet[2511]: E1008 19:56:06.984418 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:06.984598 kubelet[2511]: W1008 19:56:06.984428 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:06.984598 kubelet[2511]: E1008 19:56:06.984508 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:06.984598 kubelet[2511]: E1008 19:56:06.984596 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:06.984872 kubelet[2511]: W1008 19:56:06.984604 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:06.984872 kubelet[2511]: E1008 19:56:06.984643 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:06.984872 kubelet[2511]: E1008 19:56:06.984803 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:06.984872 kubelet[2511]: W1008 19:56:06.984813 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:06.984872 kubelet[2511]: E1008 19:56:06.984842 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:06.985565 kubelet[2511]: E1008 19:56:06.985033 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:06.985565 kubelet[2511]: W1008 19:56:06.985047 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:06.985565 kubelet[2511]: E1008 19:56:06.985065 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:06.985565 kubelet[2511]: E1008 19:56:06.985273 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:06.985565 kubelet[2511]: W1008 19:56:06.985287 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:06.985565 kubelet[2511]: E1008 19:56:06.985303 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:06.985565 kubelet[2511]: E1008 19:56:06.985573 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:06.985809 kubelet[2511]: W1008 19:56:06.985585 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:06.985809 kubelet[2511]: E1008 19:56:06.985599 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:07.018090 kubelet[2511]: E1008 19:56:07.018035 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:07.018884 containerd[1438]: time="2024-10-08T19:56:07.018839474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84495cb787-m8tph,Uid:a8694277-8546-4101-8a57-a7722a36257c,Namespace:calico-system,Attempt:0,}" Oct 8 19:56:07.053289 containerd[1438]: time="2024-10-08T19:56:07.053136987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:56:07.053289 containerd[1438]: time="2024-10-08T19:56:07.053202388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:07.053289 containerd[1438]: time="2024-10-08T19:56:07.053233669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:56:07.053289 containerd[1438]: time="2024-10-08T19:56:07.053249789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:07.074661 kubelet[2511]: E1008 19:56:07.073421 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.074661 kubelet[2511]: W1008 19:56:07.073449 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.074661 kubelet[2511]: E1008 19:56:07.073496 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:07.074661 kubelet[2511]: E1008 19:56:07.074626 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.074661 kubelet[2511]: W1008 19:56:07.074639 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.074661 kubelet[2511]: E1008 19:56:07.074659 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:07.078873 kubelet[2511]: E1008 19:56:07.076310 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.078873 kubelet[2511]: W1008 19:56:07.077651 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.078873 kubelet[2511]: E1008 19:56:07.077885 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:07.078873 kubelet[2511]: E1008 19:56:07.077910 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:07.078873 kubelet[2511]: E1008 19:56:07.077960 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.078873 kubelet[2511]: W1008 19:56:07.077969 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.078873 kubelet[2511]: E1008 19:56:07.078238 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.078873 kubelet[2511]: W1008 19:56:07.078250 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.078873 kubelet[2511]: E1008 19:56:07.078278 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:07.078873 kubelet[2511]: E1008 19:56:07.078485 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:07.079271 kubelet[2511]: E1008 19:56:07.078540 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.079271 kubelet[2511]: W1008 19:56:07.078546 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.079271 kubelet[2511]: E1008 19:56:07.078557 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:07.079271 kubelet[2511]: E1008 19:56:07.078946 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.079271 kubelet[2511]: W1008 19:56:07.078983 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.079271 kubelet[2511]: E1008 19:56:07.079013 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:07.079271 kubelet[2511]: E1008 19:56:07.079263 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.079607 kubelet[2511]: W1008 19:56:07.079276 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.079607 kubelet[2511]: E1008 19:56:07.079297 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:07.080644 kubelet[2511]: E1008 19:56:07.079997 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.080644 kubelet[2511]: W1008 19:56:07.080012 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.080644 kubelet[2511]: E1008 19:56:07.080070 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:07.080644 kubelet[2511]: E1008 19:56:07.080274 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.080644 kubelet[2511]: W1008 19:56:07.080291 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.080644 kubelet[2511]: E1008 19:56:07.080450 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:07.080644 kubelet[2511]: E1008 19:56:07.080540 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.080644 kubelet[2511]: W1008 19:56:07.080550 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.080644 kubelet[2511]: E1008 19:56:07.080596 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:07.080900 kubelet[2511]: E1008 19:56:07.080847 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.080900 kubelet[2511]: W1008 19:56:07.080857 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.080943 kubelet[2511]: E1008 19:56:07.080888 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:07.081086 kubelet[2511]: E1008 19:56:07.081064 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.081086 kubelet[2511]: W1008 19:56:07.081077 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.081746 kubelet[2511]: E1008 19:56:07.081124 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:07.081746 kubelet[2511]: E1008 19:56:07.081302 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.081746 kubelet[2511]: W1008 19:56:07.081328 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.081746 kubelet[2511]: E1008 19:56:07.081348 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:07.081899 containerd[1438]: time="2024-10-08T19:56:07.081356736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kktvz,Uid:0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676,Namespace:calico-system,Attempt:0,}" Oct 8 19:56:07.081935 kubelet[2511]: E1008 19:56:07.081772 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.081935 kubelet[2511]: W1008 19:56:07.081784 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.081935 kubelet[2511]: E1008 19:56:07.081801 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:07.082680 kubelet[2511]: E1008 19:56:07.082001 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.082680 kubelet[2511]: W1008 19:56:07.082376 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.082680 kubelet[2511]: E1008 19:56:07.082438 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:07.082680 kubelet[2511]: E1008 19:56:07.082679 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.082765 kubelet[2511]: W1008 19:56:07.082691 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.082839 kubelet[2511]: E1008 19:56:07.082804 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:07.082874 kubelet[2511]: E1008 19:56:07.082844 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.082874 kubelet[2511]: W1008 19:56:07.082853 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.083010 kubelet[2511]: E1008 19:56:07.082987 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.083010 kubelet[2511]: W1008 19:56:07.083007 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.083066 kubelet[2511]: E1008 19:56:07.083011 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:07.083066 kubelet[2511]: E1008 19:56:07.083019 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:07.083806 systemd[1]: Started cri-containerd-7b475d39c33a90b61a34d38c616860e3b71e77a5a71e3a47db5f84a8ef98c93b.scope - libcontainer container 7b475d39c33a90b61a34d38c616860e3b71e77a5a71e3a47db5f84a8ef98c93b. 
Oct 8 19:56:07.084104 kubelet[2511]: E1008 19:56:07.083947 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.084104 kubelet[2511]: W1008 19:56:07.083964 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.084104 kubelet[2511]: E1008 19:56:07.083988 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:07.084603 kubelet[2511]: E1008 19:56:07.084577 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.084603 kubelet[2511]: W1008 19:56:07.084598 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.084719 kubelet[2511]: E1008 19:56:07.084629 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:07.084908 kubelet[2511]: E1008 19:56:07.084886 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.084908 kubelet[2511]: W1008 19:56:07.084901 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.084968 kubelet[2511]: E1008 19:56:07.084925 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:07.086143 kubelet[2511]: E1008 19:56:07.086108 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.086221 kubelet[2511]: W1008 19:56:07.086160 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.086221 kubelet[2511]: E1008 19:56:07.086189 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:07.086918 kubelet[2511]: E1008 19:56:07.086594 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.086918 kubelet[2511]: W1008 19:56:07.086669 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.086918 kubelet[2511]: E1008 19:56:07.086694 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:07.087461 kubelet[2511]: E1008 19:56:07.087437 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.087461 kubelet[2511]: W1008 19:56:07.087457 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.087568 kubelet[2511]: E1008 19:56:07.087474 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:56:07.097175 kubelet[2511]: E1008 19:56:07.097137 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:07.097175 kubelet[2511]: W1008 19:56:07.097161 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:07.097326 kubelet[2511]: E1008 19:56:07.097182 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:07.118462 containerd[1438]: time="2024-10-08T19:56:07.118417407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84495cb787-m8tph,Uid:a8694277-8546-4101-8a57-a7722a36257c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b475d39c33a90b61a34d38c616860e3b71e77a5a71e3a47db5f84a8ef98c93b\"" Oct 8 19:56:07.119227 kubelet[2511]: E1008 19:56:07.119132 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:07.127115 containerd[1438]: time="2024-10-08T19:56:07.127086327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 19:56:07.164282 containerd[1438]: time="2024-10-08T19:56:07.164136118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:56:07.164282 containerd[1438]: time="2024-10-08T19:56:07.164194799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:07.164282 containerd[1438]: time="2024-10-08T19:56:07.164208279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:56:07.164282 containerd[1438]: time="2024-10-08T19:56:07.164217279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:07.184491 systemd[1]: Started cri-containerd-8c09671fa4fc0938a1c272354bbb006f3b6fa3a810fb0d66b7aa3a1bab2775a3.scope - libcontainer container 8c09671fa4fc0938a1c272354bbb006f3b6fa3a810fb0d66b7aa3a1bab2775a3. 
Oct 8 19:56:07.208092 containerd[1438]: time="2024-10-08T19:56:07.208000083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kktvz,Uid:0e3c79ec-b2d9-4b96-bb7c-6401f6ff5676,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c09671fa4fc0938a1c272354bbb006f3b6fa3a810fb0d66b7aa3a1bab2775a3\"" Oct 8 19:56:07.208762 kubelet[2511]: E1008 19:56:07.208738 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:08.221940 kubelet[2511]: E1008 19:56:08.221902 2511 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8q6v8" podUID="b6bbb92e-10e0-4d4e-8c4d-e05b88c82846" Oct 8 19:56:08.836720 containerd[1438]: time="2024-10-08T19:56:08.836659204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:08.837358 containerd[1438]: time="2024-10-08T19:56:08.837301733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Oct 8 19:56:08.837940 containerd[1438]: time="2024-10-08T19:56:08.837904421Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:08.840454 containerd[1438]: time="2024-10-08T19:56:08.840422414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:08.841286 containerd[1438]: time="2024-10-08T19:56:08.841122063Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 1.714000256s" Oct 8 19:56:08.841286 containerd[1438]: time="2024-10-08T19:56:08.841156984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Oct 8 19:56:08.841832 containerd[1438]: time="2024-10-08T19:56:08.841632630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 19:56:08.850989 containerd[1438]: time="2024-10-08T19:56:08.849633895Z" level=info msg="CreateContainer within sandbox \"7b475d39c33a90b61a34d38c616860e3b71e77a5a71e3a47db5f84a8ef98c93b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 19:56:08.903929 containerd[1438]: time="2024-10-08T19:56:08.903863931Z" level=info msg="CreateContainer within sandbox \"7b475d39c33a90b61a34d38c616860e3b71e77a5a71e3a47db5f84a8ef98c93b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a72630375504d86f018737a1d5e4e8601fdcc644cbbfa62ce5ba216d65911c13\"" Oct 8 19:56:08.904687 containerd[1438]: time="2024-10-08T19:56:08.904539220Z" level=info msg="StartContainer for \"a72630375504d86f018737a1d5e4e8601fdcc644cbbfa62ce5ba216d65911c13\"" Oct 8 19:56:08.924756 systemd[1]: run-containerd-runc-k8s.io-a72630375504d86f018737a1d5e4e8601fdcc644cbbfa62ce5ba216d65911c13-runc.b2kapM.mount: Deactivated successfully. Oct 8 19:56:08.934489 systemd[1]: Started cri-containerd-a72630375504d86f018737a1d5e4e8601fdcc644cbbfa62ce5ba216d65911c13.scope - libcontainer container a72630375504d86f018737a1d5e4e8601fdcc644cbbfa62ce5ba216d65911c13. 
Oct 8 19:56:08.965524 containerd[1438]: time="2024-10-08T19:56:08.965457024Z" level=info msg="StartContainer for \"a72630375504d86f018737a1d5e4e8601fdcc644cbbfa62ce5ba216d65911c13\" returns successfully" Oct 8 19:56:09.300246 kubelet[2511]: E1008 19:56:09.300212 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:09.334069 kubelet[2511]: I1008 19:56:09.334009 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-84495cb787-m8tph" podStartSLOduration=1.6191356350000001 podStartE2EDuration="3.333968062s" podCreationTimestamp="2024-10-08 19:56:06 +0000 UTC" firstStartedPulling="2024-10-08 19:56:07.12662212 +0000 UTC m=+22.006444789" lastFinishedPulling="2024-10-08 19:56:08.841454547 +0000 UTC m=+23.721277216" observedRunningTime="2024-10-08 19:56:09.333352374 +0000 UTC m=+24.213175043" watchObservedRunningTime="2024-10-08 19:56:09.333968062 +0000 UTC m=+24.213790731" Oct 8 19:56:09.378422 kubelet[2511]: E1008 19:56:09.378380 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:56:09.378422 kubelet[2511]: W1008 19:56:09.378405 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:56:09.378422 kubelet[2511]: E1008 19:56:09.378427 2511 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:56:09.871068 containerd[1438]: time="2024-10-08T19:56:09.870588005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:09.872162 containerd[1438]: time="2024-10-08T19:56:09.872115144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Oct 8 19:56:09.873361 containerd[1438]: time="2024-10-08T19:56:09.873205318Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:09.877310 containerd[1438]: time="2024-10-08T19:56:09.877280770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:09.878224 containerd[1438]: time="2024-10-08T19:56:09.878099820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.03643515s" Oct 8 19:56:09.878224 containerd[1438]: time="2024-10-08T19:56:09.878138180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Oct 8 19:56:09.881036 containerd[1438]: time="2024-10-08T19:56:09.880994816Z" level=info msg="CreateContainer within sandbox \"8c09671fa4fc0938a1c272354bbb006f3b6fa3a810fb0d66b7aa3a1bab2775a3\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:56:09.906275 containerd[1438]: time="2024-10-08T19:56:09.906228135Z" level=info msg="CreateContainer within sandbox \"8c09671fa4fc0938a1c272354bbb006f3b6fa3a810fb0d66b7aa3a1bab2775a3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ac34a8dbad3f89da7ea034a0e52795910100b3d99d2c3e5d09244577b3a0b3c0\"" Oct 8 19:56:09.906750 containerd[1438]: time="2024-10-08T19:56:09.906724622Z" level=info msg="StartContainer for \"ac34a8dbad3f89da7ea034a0e52795910100b3d99d2c3e5d09244577b3a0b3c0\"" Oct 8 19:56:09.941477 systemd[1]: Started cri-containerd-ac34a8dbad3f89da7ea034a0e52795910100b3d99d2c3e5d09244577b3a0b3c0.scope - libcontainer container ac34a8dbad3f89da7ea034a0e52795910100b3d99d2c3e5d09244577b3a0b3c0. Oct 8 19:56:09.964884 containerd[1438]: time="2024-10-08T19:56:09.964839436Z" level=info msg="StartContainer for \"ac34a8dbad3f89da7ea034a0e52795910100b3d99d2c3e5d09244577b3a0b3c0\" returns successfully" Oct 8 19:56:09.992803 systemd[1]: cri-containerd-ac34a8dbad3f89da7ea034a0e52795910100b3d99d2c3e5d09244577b3a0b3c0.scope: Deactivated successfully. Oct 8 19:56:10.018525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac34a8dbad3f89da7ea034a0e52795910100b3d99d2c3e5d09244577b3a0b3c0-rootfs.mount: Deactivated successfully. 
Oct 8 19:56:10.036926 containerd[1438]: time="2024-10-08T19:56:10.030623893Z" level=info msg="shim disconnected" id=ac34a8dbad3f89da7ea034a0e52795910100b3d99d2c3e5d09244577b3a0b3c0 namespace=k8s.io Oct 8 19:56:10.036926 containerd[1438]: time="2024-10-08T19:56:10.036921649Z" level=warning msg="cleaning up after shim disconnected" id=ac34a8dbad3f89da7ea034a0e52795910100b3d99d2c3e5d09244577b3a0b3c0 namespace=k8s.io Oct 8 19:56:10.036926 containerd[1438]: time="2024-10-08T19:56:10.036936169Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:56:10.222055 kubelet[2511]: E1008 19:56:10.222003 2511 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8q6v8" podUID="b6bbb92e-10e0-4d4e-8c4d-e05b88c82846" Oct 8 19:56:10.304206 kubelet[2511]: E1008 19:56:10.304161 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:10.305738 kubelet[2511]: I1008 19:56:10.305663 2511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:56:10.305906 containerd[1438]: time="2024-10-08T19:56:10.305803108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 19:56:10.306245 kubelet[2511]: E1008 19:56:10.306222 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:12.222120 kubelet[2511]: E1008 19:56:12.221841 2511 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-8q6v8" podUID="b6bbb92e-10e0-4d4e-8c4d-e05b88c82846" Oct 8 19:56:14.222601 kubelet[2511]: E1008 19:56:14.222549 2511 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8q6v8" podUID="b6bbb92e-10e0-4d4e-8c4d-e05b88c82846" Oct 8 19:56:14.261573 containerd[1438]: time="2024-10-08T19:56:14.261522498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:14.262155 containerd[1438]: time="2024-10-08T19:56:14.262108424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Oct 8 19:56:14.262930 containerd[1438]: time="2024-10-08T19:56:14.262900072Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:14.265506 containerd[1438]: time="2024-10-08T19:56:14.265471419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:14.266113 containerd[1438]: time="2024-10-08T19:56:14.266076705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 3.960241677s" Oct 8 19:56:14.266144 containerd[1438]: time="2024-10-08T19:56:14.266110705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns 
image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Oct 8 19:56:14.267967 containerd[1438]: time="2024-10-08T19:56:14.267936924Z" level=info msg="CreateContainer within sandbox \"8c09671fa4fc0938a1c272354bbb006f3b6fa3a810fb0d66b7aa3a1bab2775a3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 19:56:14.284184 containerd[1438]: time="2024-10-08T19:56:14.284112171Z" level=info msg="CreateContainer within sandbox \"8c09671fa4fc0938a1c272354bbb006f3b6fa3a810fb0d66b7aa3a1bab2775a3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"47c9293dc65770b5a4e3c2d31d4f145bf09952f6f1da67a631da8defb254230e\"" Oct 8 19:56:14.288905 containerd[1438]: time="2024-10-08T19:56:14.288851900Z" level=info msg="StartContainer for \"47c9293dc65770b5a4e3c2d31d4f145bf09952f6f1da67a631da8defb254230e\"" Oct 8 19:56:14.318468 systemd[1]: Started cri-containerd-47c9293dc65770b5a4e3c2d31d4f145bf09952f6f1da67a631da8defb254230e.scope - libcontainer container 47c9293dc65770b5a4e3c2d31d4f145bf09952f6f1da67a631da8defb254230e. Oct 8 19:56:14.341114 containerd[1438]: time="2024-10-08T19:56:14.341072200Z" level=info msg="StartContainer for \"47c9293dc65770b5a4e3c2d31d4f145bf09952f6f1da67a631da8defb254230e\" returns successfully" Oct 8 19:56:14.810571 containerd[1438]: time="2024-10-08T19:56:14.810526656Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:56:14.812651 systemd[1]: cri-containerd-47c9293dc65770b5a4e3c2d31d4f145bf09952f6f1da67a631da8defb254230e.scope: Deactivated successfully. 
Oct 8 19:56:14.815983 kubelet[2511]: I1008 19:56:14.815906 2511 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 19:56:14.831683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47c9293dc65770b5a4e3c2d31d4f145bf09952f6f1da67a631da8defb254230e-rootfs.mount: Deactivated successfully. Oct 8 19:56:14.856266 kubelet[2511]: I1008 19:56:14.855027 2511 topology_manager.go:215] "Topology Admit Handler" podUID="f2c83810-f6a6-4339-9963-d6170097f801" podNamespace="kube-system" podName="coredns-76f75df574-nrrhj" Oct 8 19:56:14.856266 kubelet[2511]: I1008 19:56:14.855341 2511 topology_manager.go:215] "Topology Admit Handler" podUID="7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd" podNamespace="kube-system" podName="coredns-76f75df574-bsxtp" Oct 8 19:56:14.856266 kubelet[2511]: I1008 19:56:14.855963 2511 topology_manager.go:215] "Topology Admit Handler" podUID="4228eb51-05d3-4c22-81c3-f7cee202ab67" podNamespace="calico-system" podName="calico-kube-controllers-7fcfcb6ccd-6v9n9" Oct 8 19:56:14.867845 systemd[1]: Created slice kubepods-burstable-podf2c83810_f6a6_4339_9963_d6170097f801.slice - libcontainer container kubepods-burstable-podf2c83810_f6a6_4339_9963_d6170097f801.slice. Oct 8 19:56:14.877419 systemd[1]: Created slice kubepods-burstable-pod7a056a3c_fdd6_4ec5_aafb_275c20cfa1fd.slice - libcontainer container kubepods-burstable-pod7a056a3c_fdd6_4ec5_aafb_275c20cfa1fd.slice. Oct 8 19:56:14.882728 systemd[1]: Created slice kubepods-besteffort-pod4228eb51_05d3_4c22_81c3_f7cee202ab67.slice - libcontainer container kubepods-besteffort-pod4228eb51_05d3_4c22_81c3_f7cee202ab67.slice. 
Oct 8 19:56:14.914353 containerd[1438]: time="2024-10-08T19:56:14.914285969Z" level=info msg="shim disconnected" id=47c9293dc65770b5a4e3c2d31d4f145bf09952f6f1da67a631da8defb254230e namespace=k8s.io Oct 8 19:56:14.914353 containerd[1438]: time="2024-10-08T19:56:14.914351170Z" level=warning msg="cleaning up after shim disconnected" id=47c9293dc65770b5a4e3c2d31d4f145bf09952f6f1da67a631da8defb254230e namespace=k8s.io Oct 8 19:56:14.914505 containerd[1438]: time="2024-10-08T19:56:14.914364650Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:56:14.935523 kubelet[2511]: I1008 19:56:14.935426 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js5zd\" (UniqueName: \"kubernetes.io/projected/7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd-kube-api-access-js5zd\") pod \"coredns-76f75df574-bsxtp\" (UID: \"7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd\") " pod="kube-system/coredns-76f75df574-bsxtp" Oct 8 19:56:14.935523 kubelet[2511]: I1008 19:56:14.935476 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2c83810-f6a6-4339-9963-d6170097f801-config-volume\") pod \"coredns-76f75df574-nrrhj\" (UID: \"f2c83810-f6a6-4339-9963-d6170097f801\") " pod="kube-system/coredns-76f75df574-nrrhj" Oct 8 19:56:14.935523 kubelet[2511]: I1008 19:56:14.935501 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd-config-volume\") pod \"coredns-76f75df574-bsxtp\" (UID: \"7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd\") " pod="kube-system/coredns-76f75df574-bsxtp" Oct 8 19:56:14.935832 kubelet[2511]: I1008 19:56:14.935570 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj7x2\" (UniqueName: 
\"kubernetes.io/projected/4228eb51-05d3-4c22-81c3-f7cee202ab67-kube-api-access-sj7x2\") pod \"calico-kube-controllers-7fcfcb6ccd-6v9n9\" (UID: \"4228eb51-05d3-4c22-81c3-f7cee202ab67\") " pod="calico-system/calico-kube-controllers-7fcfcb6ccd-6v9n9" Oct 8 19:56:14.935832 kubelet[2511]: I1008 19:56:14.935620 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7fmb\" (UniqueName: \"kubernetes.io/projected/f2c83810-f6a6-4339-9963-d6170097f801-kube-api-access-m7fmb\") pod \"coredns-76f75df574-nrrhj\" (UID: \"f2c83810-f6a6-4339-9963-d6170097f801\") " pod="kube-system/coredns-76f75df574-nrrhj" Oct 8 19:56:14.935832 kubelet[2511]: I1008 19:56:14.935699 2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4228eb51-05d3-4c22-81c3-f7cee202ab67-tigera-ca-bundle\") pod \"calico-kube-controllers-7fcfcb6ccd-6v9n9\" (UID: \"4228eb51-05d3-4c22-81c3-f7cee202ab67\") " pod="calico-system/calico-kube-controllers-7fcfcb6ccd-6v9n9" Oct 8 19:56:15.213021 kubelet[2511]: E1008 19:56:15.212544 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:15.213411 containerd[1438]: time="2024-10-08T19:56:15.213377382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nrrhj,Uid:f2c83810-f6a6-4339-9963-d6170097f801,Namespace:kube-system,Attempt:0,}" Oct 8 19:56:15.213514 kubelet[2511]: E1008 19:56:15.213420 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:15.214027 containerd[1438]: time="2024-10-08T19:56:15.213736946Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-bsxtp,Uid:7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd,Namespace:kube-system,Attempt:0,}" Oct 8 19:56:15.214027 containerd[1438]: time="2024-10-08T19:56:15.213796906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcfcb6ccd-6v9n9,Uid:4228eb51-05d3-4c22-81c3-f7cee202ab67,Namespace:calico-system,Attempt:0,}" Oct 8 19:56:15.345174 kubelet[2511]: E1008 19:56:15.343421 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:15.358354 containerd[1438]: time="2024-10-08T19:56:15.350612790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 19:56:15.603566 containerd[1438]: time="2024-10-08T19:56:15.603001025Z" level=error msg="Failed to destroy network for sandbox \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:15.603566 containerd[1438]: time="2024-10-08T19:56:15.603344788Z" level=error msg="encountered an error cleaning up failed sandbox \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:15.603566 containerd[1438]: time="2024-10-08T19:56:15.603435389Z" level=error msg="Failed to destroy network for sandbox \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 
19:56:15.607509 containerd[1438]: time="2024-10-08T19:56:15.603796713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bsxtp,Uid:7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:15.607509 containerd[1438]: time="2024-10-08T19:56:15.604105516Z" level=error msg="encountered an error cleaning up failed sandbox \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:15.607509 containerd[1438]: time="2024-10-08T19:56:15.604149956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nrrhj,Uid:f2c83810-f6a6-4339-9963-d6170097f801,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:15.605327 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190-shm.mount: Deactivated successfully. Oct 8 19:56:15.605415 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d-shm.mount: Deactivated successfully. 
Oct 8 19:56:15.614159 kubelet[2511]: E1008 19:56:15.614120 2511 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:15.614261 kubelet[2511]: E1008 19:56:15.614201 2511 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-nrrhj" Oct 8 19:56:15.614261 kubelet[2511]: E1008 19:56:15.614222 2511 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-nrrhj" Oct 8 19:56:15.614309 kubelet[2511]: E1008 19:56:15.614279 2511 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-nrrhj_kube-system(f2c83810-f6a6-4339-9963-d6170097f801)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-nrrhj_kube-system(f2c83810-f6a6-4339-9963-d6170097f801)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-nrrhj" podUID="f2c83810-f6a6-4339-9963-d6170097f801" Oct 8 19:56:15.614727 kubelet[2511]: E1008 19:56:15.614704 2511 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:15.614789 kubelet[2511]: E1008 19:56:15.614744 2511 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-bsxtp" Oct 8 19:56:15.614789 kubelet[2511]: E1008 19:56:15.614769 2511 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-bsxtp" Oct 8 19:56:15.614836 kubelet[2511]: E1008 19:56:15.614810 2511 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-bsxtp_kube-system(7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-bsxtp_kube-system(7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-bsxtp" podUID="7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd" Oct 8 19:56:15.617848 containerd[1438]: time="2024-10-08T19:56:15.617659731Z" level=error msg="Failed to destroy network for sandbox \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:15.618274 containerd[1438]: time="2024-10-08T19:56:15.618235857Z" level=error msg="encountered an error cleaning up failed sandbox \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:15.618349 containerd[1438]: time="2024-10-08T19:56:15.618293177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcfcb6ccd-6v9n9,Uid:4228eb51-05d3-4c22-81c3-f7cee202ab67,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:15.619962 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc-shm.mount: Deactivated successfully. Oct 8 19:56:15.620986 kubelet[2511]: E1008 19:56:15.620755 2511 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:15.620986 kubelet[2511]: E1008 19:56:15.620795 2511 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fcfcb6ccd-6v9n9" Oct 8 19:56:15.620986 kubelet[2511]: E1008 19:56:15.620813 2511 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fcfcb6ccd-6v9n9" Oct 8 19:56:15.621090 kubelet[2511]: E1008 19:56:15.620855 2511 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fcfcb6ccd-6v9n9_calico-system(4228eb51-05d3-4c22-81c3-f7cee202ab67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7fcfcb6ccd-6v9n9_calico-system(4228eb51-05d3-4c22-81c3-f7cee202ab67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fcfcb6ccd-6v9n9" podUID="4228eb51-05d3-4c22-81c3-f7cee202ab67" Oct 8 19:56:15.992145 systemd[1]: Started sshd@7-10.0.0.134:22-10.0.0.1:52982.service - OpenSSH per-connection server daemon (10.0.0.1:52982). Oct 8 19:56:16.039795 sshd[3371]: Accepted publickey for core from 10.0.0.1 port 52982 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:56:16.040932 sshd[3371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:56:16.044484 systemd-logind[1419]: New session 8 of user core. Oct 8 19:56:16.052450 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 19:56:16.164597 sshd[3371]: pam_unix(sshd:session): session closed for user core Oct 8 19:56:16.167692 systemd[1]: sshd@7-10.0.0.134:22-10.0.0.1:52982.service: Deactivated successfully. Oct 8 19:56:16.169330 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:56:16.169888 systemd-logind[1419]: Session 8 logged out. Waiting for processes to exit. Oct 8 19:56:16.170731 systemd-logind[1419]: Removed session 8. Oct 8 19:56:16.226707 systemd[1]: Created slice kubepods-besteffort-podb6bbb92e_10e0_4d4e_8c4d_e05b88c82846.slice - libcontainer container kubepods-besteffort-podb6bbb92e_10e0_4d4e_8c4d_e05b88c82846.slice. 
Oct 8 19:56:16.228828 containerd[1438]: time="2024-10-08T19:56:16.228717780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8q6v8,Uid:b6bbb92e-10e0-4d4e-8c4d-e05b88c82846,Namespace:calico-system,Attempt:0,}" Oct 8 19:56:16.280871 containerd[1438]: time="2024-10-08T19:56:16.280731640Z" level=error msg="Failed to destroy network for sandbox \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:16.281092 containerd[1438]: time="2024-10-08T19:56:16.281050163Z" level=error msg="encountered an error cleaning up failed sandbox \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:16.281134 containerd[1438]: time="2024-10-08T19:56:16.281112524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8q6v8,Uid:b6bbb92e-10e0-4d4e-8c4d-e05b88c82846,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:16.281383 kubelet[2511]: E1008 19:56:16.281348 2511 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:16.281429 kubelet[2511]: E1008 19:56:16.281403 2511 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8q6v8" Oct 8 19:56:16.281429 kubelet[2511]: E1008 19:56:16.281424 2511 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8q6v8" Oct 8 19:56:16.281520 kubelet[2511]: E1008 19:56:16.281472 2511 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8q6v8_calico-system(b6bbb92e-10e0-4d4e-8c4d-e05b88c82846)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8q6v8_calico-system(b6bbb92e-10e0-4d4e-8c4d-e05b88c82846)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8q6v8" podUID="b6bbb92e-10e0-4d4e-8c4d-e05b88c82846" Oct 8 19:56:16.287210 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070-shm.mount: 
Deactivated successfully. Oct 8 19:56:16.348138 kubelet[2511]: I1008 19:56:16.347897 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Oct 8 19:56:16.349245 containerd[1438]: time="2024-10-08T19:56:16.348592052Z" level=info msg="StopPodSandbox for \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\"" Oct 8 19:56:16.349245 containerd[1438]: time="2024-10-08T19:56:16.348796774Z" level=info msg="Ensure that sandbox 848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc in task-service has been cleanup successfully" Oct 8 19:56:16.349697 kubelet[2511]: I1008 19:56:16.349655 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Oct 8 19:56:16.350869 kubelet[2511]: I1008 19:56:16.350818 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Oct 8 19:56:16.351463 containerd[1438]: time="2024-10-08T19:56:16.351064756Z" level=info msg="StopPodSandbox for \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\"" Oct 8 19:56:16.351463 containerd[1438]: time="2024-10-08T19:56:16.351250038Z" level=info msg="Ensure that sandbox 98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d in task-service has been cleanup successfully" Oct 8 19:56:16.351561 containerd[1438]: time="2024-10-08T19:56:16.351498680Z" level=info msg="StopPodSandbox for \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\"" Oct 8 19:56:16.351694 containerd[1438]: time="2024-10-08T19:56:16.351666802Z" level=info msg="Ensure that sandbox c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190 in task-service has been cleanup successfully" Oct 8 19:56:16.352372 kubelet[2511]: I1008 19:56:16.352348 2511 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" Oct 8 19:56:16.352858 containerd[1438]: time="2024-10-08T19:56:16.352826773Z" level=info msg="StopPodSandbox for \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\"" Oct 8 19:56:16.353128 containerd[1438]: time="2024-10-08T19:56:16.353069655Z" level=info msg="Ensure that sandbox 1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070 in task-service has been cleanup successfully" Oct 8 19:56:16.380973 containerd[1438]: time="2024-10-08T19:56:16.380922363Z" level=error msg="StopPodSandbox for \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\" failed" error="failed to destroy network for sandbox \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:16.387901 kubelet[2511]: E1008 19:56:16.387847 2511 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Oct 8 19:56:16.388060 kubelet[2511]: E1008 19:56:16.387961 2511 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc"} Oct 8 19:56:16.388060 kubelet[2511]: E1008 19:56:16.388014 2511 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4228eb51-05d3-4c22-81c3-f7cee202ab67\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:56:16.388060 kubelet[2511]: E1008 19:56:16.388044 2511 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4228eb51-05d3-4c22-81c3-f7cee202ab67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fcfcb6ccd-6v9n9" podUID="4228eb51-05d3-4c22-81c3-f7cee202ab67" Oct 8 19:56:16.389149 containerd[1438]: time="2024-10-08T19:56:16.388934920Z" level=error msg="StopPodSandbox for \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\" failed" error="failed to destroy network for sandbox \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:16.389240 kubelet[2511]: E1008 19:56:16.389154 2511 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Oct 8 19:56:16.389240 kubelet[2511]: E1008 19:56:16.389184 2511 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d"} Oct 8 19:56:16.389240 kubelet[2511]: E1008 19:56:16.389216 2511 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2c83810-f6a6-4339-9963-d6170097f801\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:56:16.389367 kubelet[2511]: E1008 19:56:16.389241 2511 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2c83810-f6a6-4339-9963-d6170097f801\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-nrrhj" podUID="f2c83810-f6a6-4339-9963-d6170097f801" Oct 8 19:56:16.392154 containerd[1438]: time="2024-10-08T19:56:16.392087110Z" level=error msg="StopPodSandbox for \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\" failed" error="failed to destroy network for sandbox \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 
8 19:56:16.393282 kubelet[2511]: E1008 19:56:16.392385 2511 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Oct 8 19:56:16.393282 kubelet[2511]: E1008 19:56:16.392422 2511 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190"} Oct 8 19:56:16.393282 kubelet[2511]: E1008 19:56:16.392452 2511 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:56:16.393282 kubelet[2511]: E1008 19:56:16.392475 2511 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-bsxtp" podUID="7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd" Oct 8 19:56:16.394547 
containerd[1438]: time="2024-10-08T19:56:16.394516334Z" level=error msg="StopPodSandbox for \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\" failed" error="failed to destroy network for sandbox \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:56:16.394827 kubelet[2511]: E1008 19:56:16.394792 2511 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" Oct 8 19:56:16.394827 kubelet[2511]: E1008 19:56:16.394822 2511 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070"} Oct 8 19:56:16.394923 kubelet[2511]: E1008 19:56:16.394851 2511 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6bbb92e-10e0-4d4e-8c4d-e05b88c82846\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:56:16.394923 kubelet[2511]: E1008 19:56:16.394888 2511 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6bbb92e-10e0-4d4e-8c4d-e05b88c82846\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8q6v8" podUID="b6bbb92e-10e0-4d4e-8c4d-e05b88c82846" Oct 8 19:56:19.073977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1361543923.mount: Deactivated successfully. Oct 8 19:56:19.330441 containerd[1438]: time="2024-10-08T19:56:19.330283671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:19.331057 containerd[1438]: time="2024-10-08T19:56:19.330905876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 8 19:56:19.331903 containerd[1438]: time="2024-10-08T19:56:19.331838004Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:19.333742 containerd[1438]: time="2024-10-08T19:56:19.333709380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:19.334473 containerd[1438]: time="2024-10-08T19:56:19.334336466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 3.983685516s" Oct 8 19:56:19.334473 containerd[1438]: 
time="2024-10-08T19:56:19.334367306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 8 19:56:19.341483 containerd[1438]: time="2024-10-08T19:56:19.341438967Z" level=info msg="CreateContainer within sandbox \"8c09671fa4fc0938a1c272354bbb006f3b6fa3a810fb0d66b7aa3a1bab2775a3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 19:56:19.361363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314843099.mount: Deactivated successfully. Oct 8 19:56:19.362084 containerd[1438]: time="2024-10-08T19:56:19.361657303Z" level=info msg="CreateContainer within sandbox \"8c09671fa4fc0938a1c272354bbb006f3b6fa3a810fb0d66b7aa3a1bab2775a3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8227c492ced633623ec5b72d9426ec0612b86d162c22a7e88a6f4854be1b9cef\"" Oct 8 19:56:19.362654 containerd[1438]: time="2024-10-08T19:56:19.362606991Z" level=info msg="StartContainer for \"8227c492ced633623ec5b72d9426ec0612b86d162c22a7e88a6f4854be1b9cef\"" Oct 8 19:56:19.443522 systemd[1]: Started cri-containerd-8227c492ced633623ec5b72d9426ec0612b86d162c22a7e88a6f4854be1b9cef.scope - libcontainer container 8227c492ced633623ec5b72d9426ec0612b86d162c22a7e88a6f4854be1b9cef. Oct 8 19:56:19.478490 containerd[1438]: time="2024-10-08T19:56:19.478427996Z" level=info msg="StartContainer for \"8227c492ced633623ec5b72d9426ec0612b86d162c22a7e88a6f4854be1b9cef\" returns successfully" Oct 8 19:56:19.691351 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 19:56:19.691489 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 8 19:56:20.361632 kubelet[2511]: E1008 19:56:20.361569 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:20.374943 kubelet[2511]: I1008 19:56:20.374690 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-kktvz" podStartSLOduration=2.249110062 podStartE2EDuration="14.374653312s" podCreationTimestamp="2024-10-08 19:56:06 +0000 UTC" firstStartedPulling="2024-10-08 19:56:07.209193299 +0000 UTC m=+22.089015968" lastFinishedPulling="2024-10-08 19:56:19.334736549 +0000 UTC m=+34.214559218" observedRunningTime="2024-10-08 19:56:20.374619551 +0000 UTC m=+35.254442220" watchObservedRunningTime="2024-10-08 19:56:20.374653312 +0000 UTC m=+35.254475981" Oct 8 19:56:21.176428 systemd[1]: Started sshd@8-10.0.0.134:22-10.0.0.1:52992.service - OpenSSH per-connection server daemon (10.0.0.1:52992). Oct 8 19:56:21.220102 sshd[3692]: Accepted publickey for core from 10.0.0.1 port 52992 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:56:21.221918 sshd[3692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:56:21.227009 systemd-logind[1419]: New session 9 of user core. Oct 8 19:56:21.233552 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 19:56:21.362836 kubelet[2511]: I1008 19:56:21.362753 2511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:56:21.364060 kubelet[2511]: E1008 19:56:21.363623 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:21.444773 sshd[3692]: pam_unix(sshd:session): session closed for user core Oct 8 19:56:21.448706 systemd[1]: sshd@8-10.0.0.134:22-10.0.0.1:52992.service: Deactivated successfully. 
Oct 8 19:56:21.450482 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 19:56:21.451138 systemd-logind[1419]: Session 9 logged out. Waiting for processes to exit. Oct 8 19:56:21.452601 systemd-logind[1419]: Removed session 9. Oct 8 19:56:26.458212 systemd[1]: Started sshd@9-10.0.0.134:22-10.0.0.1:60360.service - OpenSSH per-connection server daemon (10.0.0.1:60360). Oct 8 19:56:26.498114 sshd[3832]: Accepted publickey for core from 10.0.0.1 port 60360 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:56:26.499528 sshd[3832]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:56:26.503613 systemd-logind[1419]: New session 10 of user core. Oct 8 19:56:26.509448 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 19:56:26.627255 sshd[3832]: pam_unix(sshd:session): session closed for user core Oct 8 19:56:26.638845 systemd[1]: sshd@9-10.0.0.134:22-10.0.0.1:60360.service: Deactivated successfully. Oct 8 19:56:26.640267 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 19:56:26.641869 systemd-logind[1419]: Session 10 logged out. Waiting for processes to exit. Oct 8 19:56:26.649565 systemd[1]: Started sshd@10-10.0.0.134:22-10.0.0.1:60374.service - OpenSSH per-connection server daemon (10.0.0.1:60374). Oct 8 19:56:26.650430 systemd-logind[1419]: Removed session 10. Oct 8 19:56:26.681104 sshd[3847]: Accepted publickey for core from 10.0.0.1 port 60374 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:56:26.681549 sshd[3847]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:56:26.685370 systemd-logind[1419]: New session 11 of user core. Oct 8 19:56:26.696476 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 19:56:26.856192 sshd[3847]: pam_unix(sshd:session): session closed for user core Oct 8 19:56:26.869166 systemd[1]: sshd@10-10.0.0.134:22-10.0.0.1:60374.service: Deactivated successfully. 
Oct 8 19:56:26.871544 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 19:56:26.873386 systemd-logind[1419]: Session 11 logged out. Waiting for processes to exit. Oct 8 19:56:26.884078 systemd[1]: Started sshd@11-10.0.0.134:22-10.0.0.1:60384.service - OpenSSH per-connection server daemon (10.0.0.1:60384). Oct 8 19:56:26.885696 systemd-logind[1419]: Removed session 11. Oct 8 19:56:26.920522 sshd[3860]: Accepted publickey for core from 10.0.0.1 port 60384 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:56:26.921920 sshd[3860]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:56:26.925996 systemd-logind[1419]: New session 12 of user core. Oct 8 19:56:26.932454 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 19:56:27.055121 sshd[3860]: pam_unix(sshd:session): session closed for user core Oct 8 19:56:27.058212 systemd[1]: sshd@11-10.0.0.134:22-10.0.0.1:60384.service: Deactivated successfully. Oct 8 19:56:27.060541 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 19:56:27.062290 systemd-logind[1419]: Session 12 logged out. Waiting for processes to exit. Oct 8 19:56:27.063180 systemd-logind[1419]: Removed session 12. 
Oct 8 19:56:27.063833 kubelet[2511]: I1008 19:56:27.063795 2511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:56:27.064446 kubelet[2511]: E1008 19:56:27.064425 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:27.223192 containerd[1438]: time="2024-10-08T19:56:27.222711130Z" level=info msg="StopPodSandbox for \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\"" Oct 8 19:56:27.388669 kubelet[2511]: E1008 19:56:27.388583 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:27.443480 kernel: bpftool[3942]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 19:56:27.548580 containerd[1438]: 2024-10-08 19:56:27.370 [INFO][3894] k8s.go 608: Cleaning up netns ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Oct 8 19:56:27.548580 containerd[1438]: 2024-10-08 19:56:27.374 [INFO][3894] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" iface="eth0" netns="/var/run/netns/cni-b67b9b77-e01e-46b4-cc1e-3da724c4e3d3" Oct 8 19:56:27.548580 containerd[1438]: 2024-10-08 19:56:27.375 [INFO][3894] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" iface="eth0" netns="/var/run/netns/cni-b67b9b77-e01e-46b4-cc1e-3da724c4e3d3" Oct 8 19:56:27.548580 containerd[1438]: 2024-10-08 19:56:27.380 [INFO][3894] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" iface="eth0" netns="/var/run/netns/cni-b67b9b77-e01e-46b4-cc1e-3da724c4e3d3" Oct 8 19:56:27.548580 containerd[1438]: 2024-10-08 19:56:27.381 [INFO][3894] k8s.go 615: Releasing IP address(es) ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Oct 8 19:56:27.548580 containerd[1438]: 2024-10-08 19:56:27.384 [INFO][3894] utils.go 188: Calico CNI releasing IP address ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Oct 8 19:56:27.548580 containerd[1438]: 2024-10-08 19:56:27.525 [INFO][3911] ipam_plugin.go 417: Releasing address using handleID ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" HandleID="k8s-pod-network.c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Workload="localhost-k8s-coredns--76f75df574--bsxtp-eth0" Oct 8 19:56:27.548580 containerd[1438]: 2024-10-08 19:56:27.525 [INFO][3911] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:56:27.548580 containerd[1438]: 2024-10-08 19:56:27.525 [INFO][3911] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:56:27.548580 containerd[1438]: 2024-10-08 19:56:27.543 [WARNING][3911] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" HandleID="k8s-pod-network.c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Workload="localhost-k8s-coredns--76f75df574--bsxtp-eth0" Oct 8 19:56:27.548580 containerd[1438]: 2024-10-08 19:56:27.543 [INFO][3911] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" HandleID="k8s-pod-network.c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Workload="localhost-k8s-coredns--76f75df574--bsxtp-eth0" Oct 8 19:56:27.548580 containerd[1438]: 2024-10-08 19:56:27.544 [INFO][3911] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:56:27.548580 containerd[1438]: 2024-10-08 19:56:27.546 [INFO][3894] k8s.go 621: Teardown processing complete. ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Oct 8 19:56:27.549499 containerd[1438]: time="2024-10-08T19:56:27.548884866Z" level=info msg="TearDown network for sandbox \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\" successfully" Oct 8 19:56:27.550055 containerd[1438]: time="2024-10-08T19:56:27.549491430Z" level=info msg="StopPodSandbox for \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\" returns successfully" Oct 8 19:56:27.551911 kubelet[2511]: E1008 19:56:27.551877 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:27.552892 containerd[1438]: time="2024-10-08T19:56:27.552756452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bsxtp,Uid:7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd,Namespace:kube-system,Attempt:1,}" Oct 8 19:56:27.553361 systemd[1]: run-netns-cni\x2db67b9b77\x2de01e\x2d46b4\x2dcc1e\x2d3da724c4e3d3.mount: Deactivated successfully. 
Oct 8 19:56:27.719164 systemd-networkd[1371]: vxlan.calico: Link UP Oct 8 19:56:27.719172 systemd-networkd[1371]: vxlan.calico: Gained carrier Oct 8 19:56:27.739264 systemd-networkd[1371]: cali9ff0080c38b: Link UP Oct 8 19:56:27.739697 systemd-networkd[1371]: cali9ff0080c38b: Gained carrier Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.640 [INFO][3974] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--bsxtp-eth0 coredns-76f75df574- kube-system 7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd 805 0 2024-10-08 19:56:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-bsxtp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9ff0080c38b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" Namespace="kube-system" Pod="coredns-76f75df574-bsxtp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bsxtp-" Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.640 [INFO][3974] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" Namespace="kube-system" Pod="coredns-76f75df574-bsxtp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bsxtp-eth0" Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.677 [INFO][3998] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" HandleID="k8s-pod-network.d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" Workload="localhost-k8s-coredns--76f75df574--bsxtp-eth0" Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.692 [INFO][3998] ipam_plugin.go 270: Auto assigning IP 
ContainerID="d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" HandleID="k8s-pod-network.d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" Workload="localhost-k8s-coredns--76f75df574--bsxtp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058fc60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-bsxtp", "timestamp":"2024-10-08 19:56:27.677573676 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.692 [INFO][3998] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.692 [INFO][3998] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.692 [INFO][3998] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.694 [INFO][3998] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" host="localhost" Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.699 [INFO][3998] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.706 [INFO][3998] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.709 [INFO][3998] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.715 [INFO][3998] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:56:27.757572 containerd[1438]: 
2024-10-08 19:56:27.715 [INFO][3998] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" host="localhost" Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.718 [INFO][3998] ipam.go 1685: Creating new handle: k8s-pod-network.d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9 Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.723 [INFO][3998] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" host="localhost" Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.731 [INFO][3998] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" host="localhost" Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.731 [INFO][3998] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" host="localhost" Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.731 [INFO][3998] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:56:27.757572 containerd[1438]: 2024-10-08 19:56:27.731 [INFO][3998] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" HandleID="k8s-pod-network.d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" Workload="localhost-k8s-coredns--76f75df574--bsxtp-eth0" Oct 8 19:56:27.758862 containerd[1438]: 2024-10-08 19:56:27.735 [INFO][3974] k8s.go 386: Populated endpoint ContainerID="d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" Namespace="kube-system" Pod="coredns-76f75df574-bsxtp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bsxtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--bsxtp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-bsxtp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ff0080c38b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:56:27.758862 containerd[1438]: 2024-10-08 19:56:27.735 [INFO][3974] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" Namespace="kube-system" Pod="coredns-76f75df574-bsxtp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bsxtp-eth0" Oct 8 19:56:27.758862 containerd[1438]: 2024-10-08 19:56:27.735 [INFO][3974] dataplane_linux.go 68: Setting the host side veth name to cali9ff0080c38b ContainerID="d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" Namespace="kube-system" Pod="coredns-76f75df574-bsxtp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bsxtp-eth0" Oct 8 19:56:27.758862 containerd[1438]: 2024-10-08 19:56:27.739 [INFO][3974] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" Namespace="kube-system" Pod="coredns-76f75df574-bsxtp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bsxtp-eth0" Oct 8 19:56:27.758862 containerd[1438]: 2024-10-08 19:56:27.739 [INFO][3974] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" Namespace="kube-system" Pod="coredns-76f75df574-bsxtp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bsxtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--bsxtp-eth0", 
GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9", Pod:"coredns-76f75df574-bsxtp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ff0080c38b", MAC:"c2:da:ab:02:96:2b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:56:27.758862 containerd[1438]: 2024-10-08 19:56:27.753 [INFO][3974] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9" Namespace="kube-system" Pod="coredns-76f75df574-bsxtp" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bsxtp-eth0" Oct 8 19:56:27.779988 containerd[1438]: 
time="2024-10-08T19:56:27.779431140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:56:27.779988 containerd[1438]: time="2024-10-08T19:56:27.779542261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:27.779988 containerd[1438]: time="2024-10-08T19:56:27.779579421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:56:27.779988 containerd[1438]: time="2024-10-08T19:56:27.779592221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:27.803545 systemd[1]: Started cri-containerd-d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9.scope - libcontainer container d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9. 
Oct 8 19:56:27.819512 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:56:27.836289 containerd[1438]: time="2024-10-08T19:56:27.836199653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bsxtp,Uid:7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd,Namespace:kube-system,Attempt:1,} returns sandbox id \"d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9\"" Oct 8 19:56:27.838361 kubelet[2511]: E1008 19:56:27.837094 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:27.843604 containerd[1438]: time="2024-10-08T19:56:27.843384182Z" level=info msg="CreateContainer within sandbox \"d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:56:27.856027 containerd[1438]: time="2024-10-08T19:56:27.855970709Z" level=info msg="CreateContainer within sandbox \"d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4c158c9164530e8d36f8346c1fd3d19f2abc2b4ba354c6b14891f99de8a5a76\"" Oct 8 19:56:27.857435 containerd[1438]: time="2024-10-08T19:56:27.857383319Z" level=info msg="StartContainer for \"d4c158c9164530e8d36f8346c1fd3d19f2abc2b4ba354c6b14891f99de8a5a76\"" Oct 8 19:56:27.885880 systemd[1]: Started cri-containerd-d4c158c9164530e8d36f8346c1fd3d19f2abc2b4ba354c6b14891f99de8a5a76.scope - libcontainer container d4c158c9164530e8d36f8346c1fd3d19f2abc2b4ba354c6b14891f99de8a5a76. 
Oct 8 19:56:27.919651 containerd[1438]: time="2024-10-08T19:56:27.919599349Z" level=info msg="StartContainer for \"d4c158c9164530e8d36f8346c1fd3d19f2abc2b4ba354c6b14891f99de8a5a76\" returns successfully" Oct 8 19:56:28.396174 kubelet[2511]: E1008 19:56:28.394175 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:28.413412 kubelet[2511]: I1008 19:56:28.413373 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-bsxtp" podStartSLOduration=28.413329497 podStartE2EDuration="28.413329497s" podCreationTimestamp="2024-10-08 19:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:56:28.412988375 +0000 UTC m=+43.292811044" watchObservedRunningTime="2024-10-08 19:56:28.413329497 +0000 UTC m=+43.293152166" Oct 8 19:56:28.938431 kubelet[2511]: I1008 19:56:28.938367 2511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:56:28.939248 kubelet[2511]: E1008 19:56:28.939213 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:28.944460 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Oct 8 19:56:29.223400 containerd[1438]: time="2024-10-08T19:56:29.223274252Z" level=info msg="StopPodSandbox for \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\"" Oct 8 19:56:29.304917 containerd[1438]: 2024-10-08 19:56:29.270 [INFO][4226] k8s.go 608: Cleaning up netns ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Oct 8 19:56:29.304917 containerd[1438]: 2024-10-08 19:56:29.271 [INFO][4226] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" iface="eth0" netns="/var/run/netns/cni-47c21888-2e5d-806e-2dbb-9d0af3ae576a" Oct 8 19:56:29.304917 containerd[1438]: 2024-10-08 19:56:29.272 [INFO][4226] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" iface="eth0" netns="/var/run/netns/cni-47c21888-2e5d-806e-2dbb-9d0af3ae576a" Oct 8 19:56:29.304917 containerd[1438]: 2024-10-08 19:56:29.273 [INFO][4226] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" iface="eth0" netns="/var/run/netns/cni-47c21888-2e5d-806e-2dbb-9d0af3ae576a" Oct 8 19:56:29.304917 containerd[1438]: 2024-10-08 19:56:29.273 [INFO][4226] k8s.go 615: Releasing IP address(es) ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Oct 8 19:56:29.304917 containerd[1438]: 2024-10-08 19:56:29.273 [INFO][4226] utils.go 188: Calico CNI releasing IP address ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Oct 8 19:56:29.304917 containerd[1438]: 2024-10-08 19:56:29.291 [INFO][4234] ipam_plugin.go 417: Releasing address using handleID ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" HandleID="k8s-pod-network.848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Workload="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" Oct 8 19:56:29.304917 containerd[1438]: 2024-10-08 19:56:29.291 [INFO][4234] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:56:29.304917 containerd[1438]: 2024-10-08 19:56:29.291 [INFO][4234] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:56:29.304917 containerd[1438]: 2024-10-08 19:56:29.300 [WARNING][4234] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" HandleID="k8s-pod-network.848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Workload="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" Oct 8 19:56:29.304917 containerd[1438]: 2024-10-08 19:56:29.300 [INFO][4234] ipam_plugin.go 445: Releasing address using workloadID ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" HandleID="k8s-pod-network.848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Workload="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" Oct 8 19:56:29.304917 containerd[1438]: 2024-10-08 19:56:29.301 [INFO][4234] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:56:29.304917 containerd[1438]: 2024-10-08 19:56:29.303 [INFO][4226] k8s.go 621: Teardown processing complete. ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Oct 8 19:56:29.304917 containerd[1438]: time="2024-10-08T19:56:29.304818831Z" level=info msg="TearDown network for sandbox \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\" successfully" Oct 8 19:56:29.304917 containerd[1438]: time="2024-10-08T19:56:29.304845591Z" level=info msg="StopPodSandbox for \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\" returns successfully" Oct 8 19:56:29.307434 systemd[1]: run-netns-cni\x2d47c21888\x2d2e5d\x2d806e\x2d2dbb\x2d9d0af3ae576a.mount: Deactivated successfully. 
Oct 8 19:56:29.311106 containerd[1438]: time="2024-10-08T19:56:29.311048632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcfcb6ccd-6v9n9,Uid:4228eb51-05d3-4c22-81c3-f7cee202ab67,Namespace:calico-system,Attempt:1,}" Oct 8 19:56:29.399526 kubelet[2511]: E1008 19:56:29.399453 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:29.449431 systemd-networkd[1371]: cali184752cf2b3: Link UP Oct 8 19:56:29.449631 systemd-networkd[1371]: cali184752cf2b3: Gained carrier Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.372 [INFO][4248] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0 calico-kube-controllers-7fcfcb6ccd- calico-system 4228eb51-05d3-4c22-81c3-f7cee202ab67 835 0 2024-10-08 19:56:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fcfcb6ccd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7fcfcb6ccd-6v9n9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali184752cf2b3 [] []}} ContainerID="3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" Namespace="calico-system" Pod="calico-kube-controllers-7fcfcb6ccd-6v9n9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-" Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.372 [INFO][4248] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" Namespace="calico-system" Pod="calico-kube-controllers-7fcfcb6ccd-6v9n9" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.406 [INFO][4256] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" HandleID="k8s-pod-network.3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" Workload="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.417 [INFO][4256] ipam_plugin.go 270: Auto assigning IP ContainerID="3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" HandleID="k8s-pod-network.3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" Workload="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000321260), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7fcfcb6ccd-6v9n9", "timestamp":"2024-10-08 19:56:29.406921665 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.417 [INFO][4256] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.417 [INFO][4256] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.417 [INFO][4256] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.418 [INFO][4256] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" host="localhost" Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.422 [INFO][4256] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.426 [INFO][4256] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.427 [INFO][4256] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.429 [INFO][4256] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.429 [INFO][4256] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" host="localhost" Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.431 [INFO][4256] ipam.go 1685: Creating new handle: k8s-pod-network.3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213 Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.435 [INFO][4256] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" host="localhost" Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.441 [INFO][4256] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" host="localhost" Oct 8 
19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.441 [INFO][4256] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" host="localhost" Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.441 [INFO][4256] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:56:29.460955 containerd[1438]: 2024-10-08 19:56:29.441 [INFO][4256] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" HandleID="k8s-pod-network.3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" Workload="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" Oct 8 19:56:29.461565 containerd[1438]: 2024-10-08 19:56:29.445 [INFO][4248] k8s.go 386: Populated endpoint ContainerID="3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" Namespace="calico-system" Pod="calico-kube-controllers-7fcfcb6ccd-6v9n9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0", GenerateName:"calico-kube-controllers-7fcfcb6ccd-", Namespace:"calico-system", SelfLink:"", UID:"4228eb51-05d3-4c22-81c3-f7cee202ab67", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcfcb6ccd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7fcfcb6ccd-6v9n9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali184752cf2b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:56:29.461565 containerd[1438]: 2024-10-08 19:56:29.445 [INFO][4248] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" Namespace="calico-system" Pod="calico-kube-controllers-7fcfcb6ccd-6v9n9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" Oct 8 19:56:29.461565 containerd[1438]: 2024-10-08 19:56:29.445 [INFO][4248] dataplane_linux.go 68: Setting the host side veth name to cali184752cf2b3 ContainerID="3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" Namespace="calico-system" Pod="calico-kube-controllers-7fcfcb6ccd-6v9n9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" Oct 8 19:56:29.461565 containerd[1438]: 2024-10-08 19:56:29.447 [INFO][4248] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" Namespace="calico-system" Pod="calico-kube-controllers-7fcfcb6ccd-6v9n9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" Oct 8 19:56:29.461565 containerd[1438]: 2024-10-08 19:56:29.447 [INFO][4248] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" Namespace="calico-system" 
Pod="calico-kube-controllers-7fcfcb6ccd-6v9n9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0", GenerateName:"calico-kube-controllers-7fcfcb6ccd-", Namespace:"calico-system", SelfLink:"", UID:"4228eb51-05d3-4c22-81c3-f7cee202ab67", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcfcb6ccd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213", Pod:"calico-kube-controllers-7fcfcb6ccd-6v9n9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali184752cf2b3", MAC:"ba:54:d5:d5:22:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:56:29.461565 containerd[1438]: 2024-10-08 19:56:29.457 [INFO][4248] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213" Namespace="calico-system" Pod="calico-kube-controllers-7fcfcb6ccd-6v9n9" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" Oct 8 19:56:29.481867 containerd[1438]: time="2024-10-08T19:56:29.481660318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:56:29.481867 containerd[1438]: time="2024-10-08T19:56:29.481741158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:29.481867 containerd[1438]: time="2024-10-08T19:56:29.481762998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:56:29.481867 containerd[1438]: time="2024-10-08T19:56:29.481778639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:29.503522 systemd[1]: Started cri-containerd-3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213.scope - libcontainer container 3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213. 
Oct 8 19:56:29.514283 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:56:29.532036 containerd[1438]: time="2024-10-08T19:56:29.531982490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcfcb6ccd-6v9n9,Uid:4228eb51-05d3-4c22-81c3-f7cee202ab67,Namespace:calico-system,Attempt:1,} returns sandbox id \"3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213\"" Oct 8 19:56:29.534676 containerd[1438]: time="2024-10-08T19:56:29.533381859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 19:56:29.776470 systemd-networkd[1371]: cali9ff0080c38b: Gained IPv6LL Oct 8 19:56:30.222791 containerd[1438]: time="2024-10-08T19:56:30.222739577Z" level=info msg="StopPodSandbox for \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\"" Oct 8 19:56:30.309452 containerd[1438]: 2024-10-08 19:56:30.273 [INFO][4334] k8s.go 608: Cleaning up netns ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" Oct 8 19:56:30.309452 containerd[1438]: 2024-10-08 19:56:30.273 [INFO][4334] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" iface="eth0" netns="/var/run/netns/cni-686d425f-6a3b-4ffc-5fa9-e8713b2489aa" Oct 8 19:56:30.309452 containerd[1438]: 2024-10-08 19:56:30.273 [INFO][4334] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" iface="eth0" netns="/var/run/netns/cni-686d425f-6a3b-4ffc-5fa9-e8713b2489aa" Oct 8 19:56:30.309452 containerd[1438]: 2024-10-08 19:56:30.273 [INFO][4334] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" iface="eth0" netns="/var/run/netns/cni-686d425f-6a3b-4ffc-5fa9-e8713b2489aa" Oct 8 19:56:30.309452 containerd[1438]: 2024-10-08 19:56:30.273 [INFO][4334] k8s.go 615: Releasing IP address(es) ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" Oct 8 19:56:30.309452 containerd[1438]: 2024-10-08 19:56:30.273 [INFO][4334] utils.go 188: Calico CNI releasing IP address ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" Oct 8 19:56:30.309452 containerd[1438]: 2024-10-08 19:56:30.297 [INFO][4342] ipam_plugin.go 417: Releasing address using handleID ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" HandleID="k8s-pod-network.1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" Workload="localhost-k8s-csi--node--driver--8q6v8-eth0" Oct 8 19:56:30.309452 containerd[1438]: 2024-10-08 19:56:30.297 [INFO][4342] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:56:30.309452 containerd[1438]: 2024-10-08 19:56:30.297 [INFO][4342] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:56:30.309452 containerd[1438]: 2024-10-08 19:56:30.305 [WARNING][4342] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" HandleID="k8s-pod-network.1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" Workload="localhost-k8s-csi--node--driver--8q6v8-eth0" Oct 8 19:56:30.309452 containerd[1438]: 2024-10-08 19:56:30.305 [INFO][4342] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" HandleID="k8s-pod-network.1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" Workload="localhost-k8s-csi--node--driver--8q6v8-eth0" Oct 8 19:56:30.309452 containerd[1438]: 2024-10-08 19:56:30.306 [INFO][4342] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:56:30.309452 containerd[1438]: 2024-10-08 19:56:30.307 [INFO][4334] k8s.go 621: Teardown processing complete. ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" Oct 8 19:56:30.312183 containerd[1438]: time="2024-10-08T19:56:30.311505271Z" level=info msg="TearDown network for sandbox \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\" successfully" Oct 8 19:56:30.312183 containerd[1438]: time="2024-10-08T19:56:30.311535351Z" level=info msg="StopPodSandbox for \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\" returns successfully" Oct 8 19:56:30.311686 systemd[1]: run-netns-cni\x2d686d425f\x2d6a3b\x2d4ffc\x2d5fa9\x2de8713b2489aa.mount: Deactivated successfully. 
Oct 8 19:56:30.312954 containerd[1438]: time="2024-10-08T19:56:30.312553877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8q6v8,Uid:b6bbb92e-10e0-4d4e-8c4d-e05b88c82846,Namespace:calico-system,Attempt:1,}" Oct 8 19:56:30.404391 kubelet[2511]: E1008 19:56:30.404305 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:30.450217 systemd-networkd[1371]: cali31e5bd49fc1: Link UP Oct 8 19:56:30.450801 systemd-networkd[1371]: cali31e5bd49fc1: Gained carrier Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.357 [INFO][4350] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8q6v8-eth0 csi-node-driver- calico-system b6bbb92e-10e0-4d4e-8c4d-e05b88c82846 847 0 2024-10-08 19:56:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-8q6v8 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali31e5bd49fc1 [] []}} ContainerID="5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" Namespace="calico-system" Pod="csi-node-driver-8q6v8" WorkloadEndpoint="localhost-k8s-csi--node--driver--8q6v8-" Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.357 [INFO][4350] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" Namespace="calico-system" Pod="csi-node-driver-8q6v8" WorkloadEndpoint="localhost-k8s-csi--node--driver--8q6v8-eth0" Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.400 [INFO][4364] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" HandleID="k8s-pod-network.5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" Workload="localhost-k8s-csi--node--driver--8q6v8-eth0" Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.412 [INFO][4364] ipam_plugin.go 270: Auto assigning IP ContainerID="5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" HandleID="k8s-pod-network.5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" Workload="localhost-k8s-csi--node--driver--8q6v8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000196a30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8q6v8", "timestamp":"2024-10-08 19:56:30.400542685 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.413 [INFO][4364] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.413 [INFO][4364] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.413 [INFO][4364] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.416 [INFO][4364] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" host="localhost" Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.420 [INFO][4364] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.424 [INFO][4364] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.426 [INFO][4364] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.429 [INFO][4364] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.429 [INFO][4364] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" host="localhost" Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.432 [INFO][4364] ipam.go 1685: Creating new handle: k8s-pod-network.5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3 Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.437 [INFO][4364] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" host="localhost" Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.445 [INFO][4364] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" host="localhost" Oct 8 
19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.445 [INFO][4364] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" host="localhost" Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.445 [INFO][4364] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:56:30.465087 containerd[1438]: 2024-10-08 19:56:30.445 [INFO][4364] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" HandleID="k8s-pod-network.5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" Workload="localhost-k8s-csi--node--driver--8q6v8-eth0" Oct 8 19:56:30.470837 containerd[1438]: 2024-10-08 19:56:30.448 [INFO][4350] k8s.go 386: Populated endpoint ContainerID="5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" Namespace="calico-system" Pod="csi-node-driver-8q6v8" WorkloadEndpoint="localhost-k8s-csi--node--driver--8q6v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8q6v8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6bbb92e-10e0-4d4e-8c4d-e05b88c82846", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8q6v8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali31e5bd49fc1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:56:30.470837 containerd[1438]: 2024-10-08 19:56:30.448 [INFO][4350] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" Namespace="calico-system" Pod="csi-node-driver-8q6v8" WorkloadEndpoint="localhost-k8s-csi--node--driver--8q6v8-eth0" Oct 8 19:56:30.470837 containerd[1438]: 2024-10-08 19:56:30.448 [INFO][4350] dataplane_linux.go 68: Setting the host side veth name to cali31e5bd49fc1 ContainerID="5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" Namespace="calico-system" Pod="csi-node-driver-8q6v8" WorkloadEndpoint="localhost-k8s-csi--node--driver--8q6v8-eth0" Oct 8 19:56:30.470837 containerd[1438]: 2024-10-08 19:56:30.450 [INFO][4350] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" Namespace="calico-system" Pod="csi-node-driver-8q6v8" WorkloadEndpoint="localhost-k8s-csi--node--driver--8q6v8-eth0" Oct 8 19:56:30.470837 containerd[1438]: 2024-10-08 19:56:30.451 [INFO][4350] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" Namespace="calico-system" Pod="csi-node-driver-8q6v8" WorkloadEndpoint="localhost-k8s-csi--node--driver--8q6v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8q6v8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6bbb92e-10e0-4d4e-8c4d-e05b88c82846", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3", Pod:"csi-node-driver-8q6v8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali31e5bd49fc1", MAC:"3e:4f:ad:92:13:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:56:30.470837 containerd[1438]: 2024-10-08 19:56:30.459 [INFO][4350] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3" Namespace="calico-system" Pod="csi-node-driver-8q6v8" WorkloadEndpoint="localhost-k8s-csi--node--driver--8q6v8-eth0" Oct 8 19:56:30.488768 containerd[1438]: time="2024-10-08T19:56:30.488230452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:56:30.488768 containerd[1438]: time="2024-10-08T19:56:30.488285492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:30.488768 containerd[1438]: time="2024-10-08T19:56:30.488304132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:56:30.488768 containerd[1438]: time="2024-10-08T19:56:30.488327332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:30.521518 systemd[1]: Started cri-containerd-5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3.scope - libcontainer container 5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3. Oct 8 19:56:30.532562 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:56:30.562067 containerd[1438]: time="2024-10-08T19:56:30.562030968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8q6v8,Uid:b6bbb92e-10e0-4d4e-8c4d-e05b88c82846,Namespace:calico-system,Attempt:1,} returns sandbox id \"5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3\"" Oct 8 19:56:31.016251 containerd[1438]: time="2024-10-08T19:56:31.016200659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:31.017071 containerd[1438]: time="2024-10-08T19:56:31.017027784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 8 19:56:31.017918 containerd[1438]: time="2024-10-08T19:56:31.017889589Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:31.020273 containerd[1438]: time="2024-10-08T19:56:31.020200724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:31.021277 containerd[1438]: time="2024-10-08T19:56:31.021176490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 1.486558503s" Oct 8 19:56:31.021277 containerd[1438]: time="2024-10-08T19:56:31.021217971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Oct 8 19:56:31.022779 containerd[1438]: time="2024-10-08T19:56:31.021755254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 19:56:31.031820 containerd[1438]: time="2024-10-08T19:56:31.031772117Z" level=info msg="CreateContainer within sandbox \"3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 19:56:31.045704 containerd[1438]: time="2024-10-08T19:56:31.044964681Z" level=info msg="CreateContainer within sandbox \"3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7037b52c62ab022012431683363e21d96f96a0884d5e8e745c2a9a0d2bdc0207\"" Oct 8 19:56:31.045704 containerd[1438]: time="2024-10-08T19:56:31.045452684Z" level=info msg="StartContainer for 
\"7037b52c62ab022012431683363e21d96f96a0884d5e8e745c2a9a0d2bdc0207\"" Oct 8 19:56:31.082509 systemd[1]: Started cri-containerd-7037b52c62ab022012431683363e21d96f96a0884d5e8e745c2a9a0d2bdc0207.scope - libcontainer container 7037b52c62ab022012431683363e21d96f96a0884d5e8e745c2a9a0d2bdc0207. Oct 8 19:56:31.115818 containerd[1438]: time="2024-10-08T19:56:31.115773168Z" level=info msg="StartContainer for \"7037b52c62ab022012431683363e21d96f96a0884d5e8e745c2a9a0d2bdc0207\" returns successfully" Oct 8 19:56:31.224109 containerd[1438]: time="2024-10-08T19:56:31.224041773Z" level=info msg="StopPodSandbox for \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\"" Oct 8 19:56:31.312554 systemd-networkd[1371]: cali184752cf2b3: Gained IPv6LL Oct 8 19:56:31.337984 containerd[1438]: 2024-10-08 19:56:31.284 [INFO][4485] k8s.go 608: Cleaning up netns ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Oct 8 19:56:31.337984 containerd[1438]: 2024-10-08 19:56:31.284 [INFO][4485] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" iface="eth0" netns="/var/run/netns/cni-f462aec6-bbe0-6610-e3b7-6631a8a1a83c" Oct 8 19:56:31.337984 containerd[1438]: 2024-10-08 19:56:31.284 [INFO][4485] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" iface="eth0" netns="/var/run/netns/cni-f462aec6-bbe0-6610-e3b7-6631a8a1a83c" Oct 8 19:56:31.337984 containerd[1438]: 2024-10-08 19:56:31.285 [INFO][4485] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" iface="eth0" netns="/var/run/netns/cni-f462aec6-bbe0-6610-e3b7-6631a8a1a83c" Oct 8 19:56:31.337984 containerd[1438]: 2024-10-08 19:56:31.285 [INFO][4485] k8s.go 615: Releasing IP address(es) ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Oct 8 19:56:31.337984 containerd[1438]: 2024-10-08 19:56:31.285 [INFO][4485] utils.go 188: Calico CNI releasing IP address ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Oct 8 19:56:31.337984 containerd[1438]: 2024-10-08 19:56:31.307 [INFO][4493] ipam_plugin.go 417: Releasing address using handleID ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" HandleID="k8s-pod-network.98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Workload="localhost-k8s-coredns--76f75df574--nrrhj-eth0" Oct 8 19:56:31.337984 containerd[1438]: 2024-10-08 19:56:31.307 [INFO][4493] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:56:31.337984 containerd[1438]: 2024-10-08 19:56:31.307 [INFO][4493] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:56:31.337984 containerd[1438]: 2024-10-08 19:56:31.331 [WARNING][4493] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" HandleID="k8s-pod-network.98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Workload="localhost-k8s-coredns--76f75df574--nrrhj-eth0" Oct 8 19:56:31.337984 containerd[1438]: 2024-10-08 19:56:31.331 [INFO][4493] ipam_plugin.go 445: Releasing address using workloadID ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" HandleID="k8s-pod-network.98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Workload="localhost-k8s-coredns--76f75df574--nrrhj-eth0" Oct 8 19:56:31.337984 containerd[1438]: 2024-10-08 19:56:31.333 [INFO][4493] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:56:31.337984 containerd[1438]: 2024-10-08 19:56:31.334 [INFO][4485] k8s.go 621: Teardown processing complete. ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Oct 8 19:56:31.338635 containerd[1438]: time="2024-10-08T19:56:31.338405216Z" level=info msg="TearDown network for sandbox \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\" successfully" Oct 8 19:56:31.338635 containerd[1438]: time="2024-10-08T19:56:31.338434016Z" level=info msg="StopPodSandbox for \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\" returns successfully" Oct 8 19:56:31.338835 kubelet[2511]: E1008 19:56:31.338796 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:31.340664 containerd[1438]: time="2024-10-08T19:56:31.340242268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nrrhj,Uid:f2c83810-f6a6-4339-9963-d6170097f801,Namespace:kube-system,Attempt:1,}" Oct 8 19:56:31.432851 kubelet[2511]: I1008 19:56:31.432811 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7fcfcb6ccd-6v9n9" 
podStartSLOduration=23.944266337 podStartE2EDuration="25.432768893s" podCreationTimestamp="2024-10-08 19:56:06 +0000 UTC" firstStartedPulling="2024-10-08 19:56:29.533037697 +0000 UTC m=+44.412860366" lastFinishedPulling="2024-10-08 19:56:31.021540253 +0000 UTC m=+45.901362922" observedRunningTime="2024-10-08 19:56:31.432643052 +0000 UTC m=+46.312465721" watchObservedRunningTime="2024-10-08 19:56:31.432768893 +0000 UTC m=+46.312591562" Oct 8 19:56:31.538751 systemd-networkd[1371]: cali44730874cbe: Link UP Oct 8 19:56:31.539176 systemd-networkd[1371]: cali44730874cbe: Gained carrier Oct 8 19:56:31.554484 systemd[1]: run-netns-cni\x2df462aec6\x2dbbe0\x2d6610\x2de3b7\x2d6631a8a1a83c.mount: Deactivated successfully. Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.402 [INFO][4500] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--nrrhj-eth0 coredns-76f75df574- kube-system f2c83810-f6a6-4339-9963-d6170097f801 863 0 2024-10-08 19:56:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-nrrhj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali44730874cbe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" Namespace="kube-system" Pod="coredns-76f75df574-nrrhj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--nrrhj-" Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.402 [INFO][4500] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" Namespace="kube-system" Pod="coredns-76f75df574-nrrhj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--nrrhj-eth0" Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.472 
[INFO][4513] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" HandleID="k8s-pod-network.ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" Workload="localhost-k8s-coredns--76f75df574--nrrhj-eth0" Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.488 [INFO][4513] ipam_plugin.go 270: Auto assigning IP ContainerID="ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" HandleID="k8s-pod-network.ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" Workload="localhost-k8s-coredns--76f75df574--nrrhj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003067d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-nrrhj", "timestamp":"2024-10-08 19:56:31.472810666 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.489 [INFO][4513] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.489 [INFO][4513] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.490 [INFO][4513] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.493 [INFO][4513] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" host="localhost" Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.499 [INFO][4513] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.505 [INFO][4513] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.509 [INFO][4513] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.514 [INFO][4513] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.514 [INFO][4513] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" host="localhost" Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.517 [INFO][4513] ipam.go 1685: Creating new handle: k8s-pod-network.ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615 Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.521 [INFO][4513] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" host="localhost" Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.529 [INFO][4513] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" host="localhost" Oct 8 
19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.529 [INFO][4513] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" host="localhost" Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.529 [INFO][4513] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:56:31.562060 containerd[1438]: 2024-10-08 19:56:31.529 [INFO][4513] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" HandleID="k8s-pod-network.ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" Workload="localhost-k8s-coredns--76f75df574--nrrhj-eth0" Oct 8 19:56:31.562775 containerd[1438]: 2024-10-08 19:56:31.534 [INFO][4500] k8s.go 386: Populated endpoint ContainerID="ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" Namespace="kube-system" Pod="coredns-76f75df574-nrrhj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--nrrhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--nrrhj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f2c83810-f6a6-4339-9963-d6170097f801", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-76f75df574-nrrhj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44730874cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:56:31.562775 containerd[1438]: 2024-10-08 19:56:31.534 [INFO][4500] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" Namespace="kube-system" Pod="coredns-76f75df574-nrrhj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--nrrhj-eth0" Oct 8 19:56:31.562775 containerd[1438]: 2024-10-08 19:56:31.534 [INFO][4500] dataplane_linux.go 68: Setting the host side veth name to cali44730874cbe ContainerID="ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" Namespace="kube-system" Pod="coredns-76f75df574-nrrhj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--nrrhj-eth0" Oct 8 19:56:31.562775 containerd[1438]: 2024-10-08 19:56:31.536 [INFO][4500] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" Namespace="kube-system" Pod="coredns-76f75df574-nrrhj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--nrrhj-eth0" Oct 8 19:56:31.562775 containerd[1438]: 2024-10-08 19:56:31.541 [INFO][4500] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" Namespace="kube-system" Pod="coredns-76f75df574-nrrhj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--nrrhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--nrrhj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f2c83810-f6a6-4339-9963-d6170097f801", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615", Pod:"coredns-76f75df574-nrrhj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44730874cbe", MAC:"6a:d1:5a:6b:0e:1c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:56:31.562775 containerd[1438]: 2024-10-08 19:56:31.558 [INFO][4500] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615" Namespace="kube-system" Pod="coredns-76f75df574-nrrhj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--nrrhj-eth0" Oct 8 19:56:31.586885 containerd[1438]: time="2024-10-08T19:56:31.586568585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:56:31.586885 containerd[1438]: time="2024-10-08T19:56:31.586643905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:31.586885 containerd[1438]: time="2024-10-08T19:56:31.586666106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:56:31.586885 containerd[1438]: time="2024-10-08T19:56:31.586677186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:56:31.617128 systemd[1]: Started cri-containerd-ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615.scope - libcontainer container ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615. 
Oct 8 19:56:31.631806 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:56:31.654684 containerd[1438]: time="2024-10-08T19:56:31.654643655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nrrhj,Uid:f2c83810-f6a6-4339-9963-d6170097f801,Namespace:kube-system,Attempt:1,} returns sandbox id \"ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615\"" Oct 8 19:56:31.655617 kubelet[2511]: E1008 19:56:31.655397 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:31.660834 containerd[1438]: time="2024-10-08T19:56:31.660770774Z" level=info msg="CreateContainer within sandbox \"ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:56:31.691850 containerd[1438]: time="2024-10-08T19:56:31.691724690Z" level=info msg="CreateContainer within sandbox \"ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3cbce86e5a2bbd5a121287e4824c8ce62e031c3ef125e06c3ee2711b22522012\"" Oct 8 19:56:31.692588 containerd[1438]: time="2024-10-08T19:56:31.692553695Z" level=info msg="StartContainer for \"3cbce86e5a2bbd5a121287e4824c8ce62e031c3ef125e06c3ee2711b22522012\"" Oct 8 19:56:31.723526 systemd[1]: Started cri-containerd-3cbce86e5a2bbd5a121287e4824c8ce62e031c3ef125e06c3ee2711b22522012.scope - libcontainer container 3cbce86e5a2bbd5a121287e4824c8ce62e031c3ef125e06c3ee2711b22522012. 
Oct 8 19:56:31.748837 containerd[1438]: time="2024-10-08T19:56:31.748705610Z" level=info msg="StartContainer for \"3cbce86e5a2bbd5a121287e4824c8ce62e031c3ef125e06c3ee2711b22522012\" returns successfully" Oct 8 19:56:32.071591 systemd[1]: Started sshd@12-10.0.0.134:22-10.0.0.1:60390.service - OpenSSH per-connection server daemon (10.0.0.1:60390). Oct 8 19:56:32.120053 containerd[1438]: time="2024-10-08T19:56:32.119994623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:32.121207 containerd[1438]: time="2024-10-08T19:56:32.121173230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Oct 8 19:56:32.122278 sshd[4642]: Accepted publickey for core from 10.0.0.1 port 60390 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:56:32.124149 sshd[4642]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:56:32.124682 containerd[1438]: time="2024-10-08T19:56:32.124372410Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:32.126927 containerd[1438]: time="2024-10-08T19:56:32.126886865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:32.127429 containerd[1438]: time="2024-10-08T19:56:32.127401269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.105600255s" Oct 8 19:56:32.127474 
containerd[1438]: time="2024-10-08T19:56:32.127436509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Oct 8 19:56:32.130325 systemd-logind[1419]: New session 13 of user core. Oct 8 19:56:32.130962 containerd[1438]: time="2024-10-08T19:56:32.130913770Z" level=info msg="CreateContainer within sandbox \"5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 19:56:32.139548 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 19:56:32.179713 containerd[1438]: time="2024-10-08T19:56:32.179506951Z" level=info msg="CreateContainer within sandbox \"5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6e2c567cf3be943ef60efaefc215a423d70ce0b3fc5b097c79c0e0388684715e\"" Oct 8 19:56:32.180908 containerd[1438]: time="2024-10-08T19:56:32.180406037Z" level=info msg="StartContainer for \"6e2c567cf3be943ef60efaefc215a423d70ce0b3fc5b097c79c0e0388684715e\"" Oct 8 19:56:32.219509 systemd[1]: Started cri-containerd-6e2c567cf3be943ef60efaefc215a423d70ce0b3fc5b097c79c0e0388684715e.scope - libcontainer container 6e2c567cf3be943ef60efaefc215a423d70ce0b3fc5b097c79c0e0388684715e. 
Oct 8 19:56:32.272461 systemd-networkd[1371]: cali31e5bd49fc1: Gained IPv6LL Oct 8 19:56:32.275007 containerd[1438]: time="2024-10-08T19:56:32.274964423Z" level=info msg="StartContainer for \"6e2c567cf3be943ef60efaefc215a423d70ce0b3fc5b097c79c0e0388684715e\" returns successfully" Oct 8 19:56:32.277748 containerd[1438]: time="2024-10-08T19:56:32.276011990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 19:56:32.392595 sshd[4642]: pam_unix(sshd:session): session closed for user core Oct 8 19:56:32.403958 systemd[1]: sshd@12-10.0.0.134:22-10.0.0.1:60390.service: Deactivated successfully. Oct 8 19:56:32.405552 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 19:56:32.406113 systemd-logind[1419]: Session 13 logged out. Waiting for processes to exit. Oct 8 19:56:32.407902 systemd[1]: Started sshd@13-10.0.0.134:22-10.0.0.1:60402.service - OpenSSH per-connection server daemon (10.0.0.1:60402). Oct 8 19:56:32.408655 systemd-logind[1419]: Removed session 13. 
Oct 8 19:56:32.424759 kubelet[2511]: E1008 19:56:32.424725 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:32.448220 sshd[4698]: Accepted publickey for core from 10.0.0.1 port 60402 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:56:32.451495 sshd[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:56:32.452299 kubelet[2511]: I1008 19:56:32.451840 2511 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-nrrhj" podStartSLOduration=32.451782119 podStartE2EDuration="32.451782119s" podCreationTimestamp="2024-10-08 19:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:56:32.437001867 +0000 UTC m=+47.316824536" watchObservedRunningTime="2024-10-08 19:56:32.451782119 +0000 UTC m=+47.331604748" Oct 8 19:56:32.463912 systemd-logind[1419]: New session 14 of user core. Oct 8 19:56:32.473560 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 19:56:32.592940 systemd-networkd[1371]: cali44730874cbe: Gained IPv6LL Oct 8 19:56:32.764954 sshd[4698]: pam_unix(sshd:session): session closed for user core Oct 8 19:56:32.771786 systemd[1]: sshd@13-10.0.0.134:22-10.0.0.1:60402.service: Deactivated successfully. Oct 8 19:56:32.776239 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 19:56:32.777453 systemd-logind[1419]: Session 14 logged out. Waiting for processes to exit. Oct 8 19:56:32.782020 systemd[1]: Started sshd@14-10.0.0.134:22-10.0.0.1:59690.service - OpenSSH per-connection server daemon (10.0.0.1:59690). Oct 8 19:56:32.783345 systemd-logind[1419]: Removed session 14. 
Oct 8 19:56:32.840619 sshd[4716]: Accepted publickey for core from 10.0.0.1 port 59690 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:56:32.842203 sshd[4716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:56:32.847473 systemd-logind[1419]: New session 15 of user core. Oct 8 19:56:32.858493 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 19:56:33.258224 containerd[1438]: time="2024-10-08T19:56:33.258170525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:33.259217 containerd[1438]: time="2024-10-08T19:56:33.258972690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 8 19:56:33.260028 containerd[1438]: time="2024-10-08T19:56:33.259951536Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:33.262547 containerd[1438]: time="2024-10-08T19:56:33.262495592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:56:33.263121 containerd[1438]: time="2024-10-08T19:56:33.263019715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 986.973245ms" Oct 8 19:56:33.263121 containerd[1438]: time="2024-10-08T19:56:33.263053835Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 8 19:56:33.264668 containerd[1438]: time="2024-10-08T19:56:33.264633325Z" level=info msg="CreateContainer within sandbox \"5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 19:56:33.279918 containerd[1438]: time="2024-10-08T19:56:33.279827417Z" level=info msg="CreateContainer within sandbox \"5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c05ff5782fa9f6ece700a640bfb040d69d0f0de2901b730abac8b83ae61fcb79\"" Oct 8 19:56:33.282049 containerd[1438]: time="2024-10-08T19:56:33.281023744Z" level=info msg="StartContainer for \"c05ff5782fa9f6ece700a640bfb040d69d0f0de2901b730abac8b83ae61fcb79\"" Oct 8 19:56:33.315700 systemd[1]: Started cri-containerd-c05ff5782fa9f6ece700a640bfb040d69d0f0de2901b730abac8b83ae61fcb79.scope - libcontainer container c05ff5782fa9f6ece700a640bfb040d69d0f0de2901b730abac8b83ae61fcb79. Oct 8 19:56:33.342567 containerd[1438]: time="2024-10-08T19:56:33.342513438Z" level=info msg="StartContainer for \"c05ff5782fa9f6ece700a640bfb040d69d0f0de2901b730abac8b83ae61fcb79\" returns successfully" Oct 8 19:56:33.434181 kubelet[2511]: E1008 19:56:33.432677 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:33.551805 systemd[1]: run-containerd-runc-k8s.io-c05ff5782fa9f6ece700a640bfb040d69d0f0de2901b730abac8b83ae61fcb79-runc.5207vE.mount: Deactivated successfully. 
Oct 8 19:56:34.313786 kubelet[2511]: I1008 19:56:34.313714 2511 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 19:56:34.319644 kubelet[2511]: I1008 19:56:34.319547 2511 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 19:56:34.407669 sshd[4716]: pam_unix(sshd:session): session closed for user core Oct 8 19:56:34.418943 systemd[1]: sshd@14-10.0.0.134:22-10.0.0.1:59690.service: Deactivated successfully. Oct 8 19:56:34.420975 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 19:56:34.423246 systemd-logind[1419]: Session 15 logged out. Waiting for processes to exit. Oct 8 19:56:34.430972 systemd[1]: Started sshd@15-10.0.0.134:22-10.0.0.1:59694.service - OpenSSH per-connection server daemon (10.0.0.1:59694). Oct 8 19:56:34.433589 systemd-logind[1419]: Removed session 15. Oct 8 19:56:34.434386 kubelet[2511]: E1008 19:56:34.434362 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:56:34.467634 sshd[4781]: Accepted publickey for core from 10.0.0.1 port 59694 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:56:34.468965 sshd[4781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:56:34.474383 systemd-logind[1419]: New session 16 of user core. Oct 8 19:56:34.487496 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 19:56:34.811234 sshd[4781]: pam_unix(sshd:session): session closed for user core Oct 8 19:56:34.819868 systemd[1]: sshd@15-10.0.0.134:22-10.0.0.1:59694.service: Deactivated successfully. Oct 8 19:56:34.821914 systemd[1]: session-16.scope: Deactivated successfully. 
Oct 8 19:56:34.824283 systemd-logind[1419]: Session 16 logged out. Waiting for processes to exit. Oct 8 19:56:34.830744 systemd[1]: Started sshd@16-10.0.0.134:22-10.0.0.1:59704.service - OpenSSH per-connection server daemon (10.0.0.1:59704). Oct 8 19:56:34.832079 systemd-logind[1419]: Removed session 16. Oct 8 19:56:34.863582 sshd[4794]: Accepted publickey for core from 10.0.0.1 port 59704 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:56:34.865027 sshd[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:56:34.869217 systemd-logind[1419]: New session 17 of user core. Oct 8 19:56:34.877461 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 19:56:35.006850 sshd[4794]: pam_unix(sshd:session): session closed for user core Oct 8 19:56:35.009934 systemd[1]: sshd@16-10.0.0.134:22-10.0.0.1:59704.service: Deactivated successfully. Oct 8 19:56:35.011826 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 19:56:35.013975 systemd-logind[1419]: Session 17 logged out. Waiting for processes to exit. Oct 8 19:56:35.014768 systemd-logind[1419]: Removed session 17. Oct 8 19:56:40.018394 systemd[1]: Started sshd@17-10.0.0.134:22-10.0.0.1:59706.service - OpenSSH per-connection server daemon (10.0.0.1:59706). Oct 8 19:56:40.061556 sshd[4825]: Accepted publickey for core from 10.0.0.1 port 59706 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:56:40.062853 sshd[4825]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:56:40.068052 systemd-logind[1419]: New session 18 of user core. Oct 8 19:56:40.079522 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 19:56:40.256226 sshd[4825]: pam_unix(sshd:session): session closed for user core Oct 8 19:56:40.260336 systemd[1]: sshd@17-10.0.0.134:22-10.0.0.1:59706.service: Deactivated successfully. Oct 8 19:56:40.262257 systemd[1]: session-18.scope: Deactivated successfully. 
Oct 8 19:56:40.263224 systemd-logind[1419]: Session 18 logged out. Waiting for processes to exit. Oct 8 19:56:40.264854 systemd-logind[1419]: Removed session 18. Oct 8 19:56:45.193884 containerd[1438]: time="2024-10-08T19:56:45.193801079Z" level=info msg="StopPodSandbox for \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\"" Oct 8 19:56:45.263671 systemd[1]: Started sshd@18-10.0.0.134:22-10.0.0.1:41488.service - OpenSSH per-connection server daemon (10.0.0.1:41488). Oct 8 19:56:45.287467 containerd[1438]: 2024-10-08 19:56:45.244 [WARNING][4859] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--nrrhj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f2c83810-f6a6-4339-9963-d6170097f801", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615", Pod:"coredns-76f75df574-nrrhj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali44730874cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:56:45.287467 containerd[1438]: 2024-10-08 19:56:45.244 [INFO][4859] k8s.go 608: Cleaning up netns ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Oct 8 19:56:45.287467 containerd[1438]: 2024-10-08 19:56:45.244 [INFO][4859] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" iface="eth0" netns="" Oct 8 19:56:45.287467 containerd[1438]: 2024-10-08 19:56:45.244 [INFO][4859] k8s.go 615: Releasing IP address(es) ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Oct 8 19:56:45.287467 containerd[1438]: 2024-10-08 19:56:45.244 [INFO][4859] utils.go 188: Calico CNI releasing IP address ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Oct 8 19:56:45.287467 containerd[1438]: 2024-10-08 19:56:45.271 [INFO][4879] ipam_plugin.go 417: Releasing address using handleID ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" HandleID="k8s-pod-network.98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Workload="localhost-k8s-coredns--76f75df574--nrrhj-eth0" Oct 8 19:56:45.287467 containerd[1438]: 2024-10-08 19:56:45.271 [INFO][4879] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:56:45.287467 containerd[1438]: 2024-10-08 19:56:45.271 [INFO][4879] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:56:45.287467 containerd[1438]: 2024-10-08 19:56:45.281 [WARNING][4879] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" HandleID="k8s-pod-network.98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Workload="localhost-k8s-coredns--76f75df574--nrrhj-eth0" Oct 8 19:56:45.287467 containerd[1438]: 2024-10-08 19:56:45.281 [INFO][4879] ipam_plugin.go 445: Releasing address using workloadID ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" HandleID="k8s-pod-network.98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Workload="localhost-k8s-coredns--76f75df574--nrrhj-eth0" Oct 8 19:56:45.287467 containerd[1438]: 2024-10-08 19:56:45.283 [INFO][4879] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:56:45.287467 containerd[1438]: 2024-10-08 19:56:45.284 [INFO][4859] k8s.go 621: Teardown processing complete. ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Oct 8 19:56:45.287863 containerd[1438]: time="2024-10-08T19:56:45.287510759Z" level=info msg="TearDown network for sandbox \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\" successfully" Oct 8 19:56:45.287863 containerd[1438]: time="2024-10-08T19:56:45.287535879Z" level=info msg="StopPodSandbox for \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\" returns successfully" Oct 8 19:56:45.289029 containerd[1438]: time="2024-10-08T19:56:45.288125802Z" level=info msg="RemovePodSandbox for \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\"" Oct 8 19:56:45.292183 containerd[1438]: time="2024-10-08T19:56:45.288161723Z" level=info msg="Forcibly stopping sandbox \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\"" Oct 8 19:56:45.314763 sshd[4891]: Accepted publickey for core from 10.0.0.1 port 41488 ssh2: RSA 
SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:56:45.316378 sshd[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:56:45.321896 systemd-logind[1419]: New session 19 of user core. Oct 8 19:56:45.327569 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 19:56:45.370252 containerd[1438]: 2024-10-08 19:56:45.332 [WARNING][4914] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--nrrhj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f2c83810-f6a6-4339-9963-d6170097f801", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff2d68df18bd81ee2104564f1bc44c9546826af47f31f93cc3dcfe6150975615", Pod:"coredns-76f75df574-nrrhj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44730874cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:56:45.370252 containerd[1438]: 2024-10-08 19:56:45.332 [INFO][4914] k8s.go 608: Cleaning up netns ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Oct 8 19:56:45.370252 containerd[1438]: 2024-10-08 19:56:45.332 [INFO][4914] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" iface="eth0" netns="" Oct 8 19:56:45.370252 containerd[1438]: 2024-10-08 19:56:45.332 [INFO][4914] k8s.go 615: Releasing IP address(es) ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Oct 8 19:56:45.370252 containerd[1438]: 2024-10-08 19:56:45.332 [INFO][4914] utils.go 188: Calico CNI releasing IP address ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Oct 8 19:56:45.370252 containerd[1438]: 2024-10-08 19:56:45.353 [INFO][4921] ipam_plugin.go 417: Releasing address using handleID ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" HandleID="k8s-pod-network.98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Workload="localhost-k8s-coredns--76f75df574--nrrhj-eth0" Oct 8 19:56:45.370252 containerd[1438]: 2024-10-08 19:56:45.353 [INFO][4921] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:56:45.370252 containerd[1438]: 2024-10-08 19:56:45.353 [INFO][4921] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:56:45.370252 containerd[1438]: 2024-10-08 19:56:45.361 [WARNING][4921] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" HandleID="k8s-pod-network.98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Workload="localhost-k8s-coredns--76f75df574--nrrhj-eth0" Oct 8 19:56:45.370252 containerd[1438]: 2024-10-08 19:56:45.361 [INFO][4921] ipam_plugin.go 445: Releasing address using workloadID ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" HandleID="k8s-pod-network.98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Workload="localhost-k8s-coredns--76f75df574--nrrhj-eth0" Oct 8 19:56:45.370252 containerd[1438]: 2024-10-08 19:56:45.364 [INFO][4921] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:56:45.370252 containerd[1438]: 2024-10-08 19:56:45.368 [INFO][4914] k8s.go 621: Teardown processing complete. ContainerID="98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d" Oct 8 19:56:45.370644 containerd[1438]: time="2024-10-08T19:56:45.370291303Z" level=info msg="TearDown network for sandbox \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\" successfully" Oct 8 19:56:45.382156 containerd[1438]: time="2024-10-08T19:56:45.381550841Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:56:45.382156 containerd[1438]: time="2024-10-08T19:56:45.381674642Z" level=info msg="RemovePodSandbox \"98927bcd818ef1c390ad2e99ce1471c0f53424f6527662ecaa440bb979c9905d\" returns successfully" Oct 8 19:56:45.383037 containerd[1438]: time="2024-10-08T19:56:45.383007289Z" level=info msg="StopPodSandbox for \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\"" Oct 8 19:56:45.498379 containerd[1438]: 2024-10-08 19:56:45.430 [WARNING][4952] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0", GenerateName:"calico-kube-controllers-7fcfcb6ccd-", Namespace:"calico-system", SelfLink:"", UID:"4228eb51-05d3-4c22-81c3-f7cee202ab67", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcfcb6ccd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213", Pod:"calico-kube-controllers-7fcfcb6ccd-6v9n9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali184752cf2b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:56:45.498379 containerd[1438]: 2024-10-08 19:56:45.430 [INFO][4952] k8s.go 608: Cleaning up netns ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Oct 8 19:56:45.498379 containerd[1438]: 2024-10-08 19:56:45.430 [INFO][4952] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" iface="eth0" netns="" Oct 8 19:56:45.498379 containerd[1438]: 2024-10-08 19:56:45.430 [INFO][4952] k8s.go 615: Releasing IP address(es) ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Oct 8 19:56:45.498379 containerd[1438]: 2024-10-08 19:56:45.430 [INFO][4952] utils.go 188: Calico CNI releasing IP address ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Oct 8 19:56:45.498379 containerd[1438]: 2024-10-08 19:56:45.478 [INFO][4960] ipam_plugin.go 417: Releasing address using handleID ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" HandleID="k8s-pod-network.848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Workload="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" Oct 8 19:56:45.498379 containerd[1438]: 2024-10-08 19:56:45.478 [INFO][4960] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:56:45.498379 containerd[1438]: 2024-10-08 19:56:45.478 [INFO][4960] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:56:45.498379 containerd[1438]: 2024-10-08 19:56:45.489 [WARNING][4960] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" HandleID="k8s-pod-network.848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Workload="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" Oct 8 19:56:45.498379 containerd[1438]: 2024-10-08 19:56:45.489 [INFO][4960] ipam_plugin.go 445: Releasing address using workloadID ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" HandleID="k8s-pod-network.848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Workload="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" Oct 8 19:56:45.498379 containerd[1438]: 2024-10-08 19:56:45.490 [INFO][4960] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:56:45.498379 containerd[1438]: 2024-10-08 19:56:45.494 [INFO][4952] k8s.go 621: Teardown processing complete. ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Oct 8 19:56:45.498379 containerd[1438]: time="2024-10-08T19:56:45.497964798Z" level=info msg="TearDown network for sandbox \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\" successfully" Oct 8 19:56:45.498379 containerd[1438]: time="2024-10-08T19:56:45.497992638Z" level=info msg="StopPodSandbox for \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\" returns successfully" Oct 8 19:56:45.499494 containerd[1438]: time="2024-10-08T19:56:45.498998443Z" level=info msg="RemovePodSandbox for \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\"" Oct 8 19:56:45.499494 containerd[1438]: time="2024-10-08T19:56:45.499040163Z" level=info msg="Forcibly stopping sandbox \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\"" Oct 8 19:56:45.500741 sshd[4891]: pam_unix(sshd:session): session closed for user core Oct 8 19:56:45.503884 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:41488.service: Deactivated successfully. 
Oct 8 19:56:45.507261 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 19:56:45.509659 systemd-logind[1419]: Session 19 logged out. Waiting for processes to exit. Oct 8 19:56:45.510725 systemd-logind[1419]: Removed session 19. Oct 8 19:56:45.588116 containerd[1438]: 2024-10-08 19:56:45.545 [WARNING][4987] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0", GenerateName:"calico-kube-controllers-7fcfcb6ccd-", Namespace:"calico-system", SelfLink:"", UID:"4228eb51-05d3-4c22-81c3-f7cee202ab67", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcfcb6ccd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3883b404764e829be9bd62c4911aef191f766b62200cd9c6a933b7bda33a1213", Pod:"calico-kube-controllers-7fcfcb6ccd-6v9n9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali184752cf2b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:56:45.588116 containerd[1438]: 2024-10-08 19:56:45.546 [INFO][4987] k8s.go 608: Cleaning up netns ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Oct 8 19:56:45.588116 containerd[1438]: 2024-10-08 19:56:45.546 [INFO][4987] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" iface="eth0" netns="" Oct 8 19:56:45.588116 containerd[1438]: 2024-10-08 19:56:45.546 [INFO][4987] k8s.go 615: Releasing IP address(es) ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Oct 8 19:56:45.588116 containerd[1438]: 2024-10-08 19:56:45.546 [INFO][4987] utils.go 188: Calico CNI releasing IP address ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Oct 8 19:56:45.588116 containerd[1438]: 2024-10-08 19:56:45.568 [INFO][4994] ipam_plugin.go 417: Releasing address using handleID ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" HandleID="k8s-pod-network.848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Workload="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" Oct 8 19:56:45.588116 containerd[1438]: 2024-10-08 19:56:45.568 [INFO][4994] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:56:45.588116 containerd[1438]: 2024-10-08 19:56:45.568 [INFO][4994] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:56:45.588116 containerd[1438]: 2024-10-08 19:56:45.578 [WARNING][4994] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" HandleID="k8s-pod-network.848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Workload="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" Oct 8 19:56:45.588116 containerd[1438]: 2024-10-08 19:56:45.578 [INFO][4994] ipam_plugin.go 445: Releasing address using workloadID ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" HandleID="k8s-pod-network.848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Workload="localhost-k8s-calico--kube--controllers--7fcfcb6ccd--6v9n9-eth0" Oct 8 19:56:45.588116 containerd[1438]: 2024-10-08 19:56:45.581 [INFO][4994] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:56:45.588116 containerd[1438]: 2024-10-08 19:56:45.585 [INFO][4987] k8s.go 621: Teardown processing complete. ContainerID="848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc" Oct 8 19:56:45.588544 containerd[1438]: time="2024-10-08T19:56:45.588210620Z" level=info msg="TearDown network for sandbox \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\" successfully" Oct 8 19:56:45.594145 containerd[1438]: time="2024-10-08T19:56:45.594089930Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:56:45.594266 containerd[1438]: time="2024-10-08T19:56:45.594211051Z" level=info msg="RemovePodSandbox \"848f32530ed643f9ca140853da384287b1fc923e96cd4114c2998c1b98bfb2bc\" returns successfully" Oct 8 19:56:45.594850 containerd[1438]: time="2024-10-08T19:56:45.594797254Z" level=info msg="StopPodSandbox for \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\"" Oct 8 19:56:45.693161 containerd[1438]: 2024-10-08 19:56:45.640 [WARNING][5017] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--bsxtp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9", Pod:"coredns-76f75df574-bsxtp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ff0080c38b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:56:45.693161 containerd[1438]: 2024-10-08 19:56:45.641 [INFO][5017] k8s.go 608: Cleaning up netns ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Oct 8 19:56:45.693161 containerd[1438]: 2024-10-08 19:56:45.641 [INFO][5017] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" iface="eth0" netns="" Oct 8 19:56:45.693161 containerd[1438]: 2024-10-08 19:56:45.641 [INFO][5017] k8s.go 615: Releasing IP address(es) ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Oct 8 19:56:45.693161 containerd[1438]: 2024-10-08 19:56:45.641 [INFO][5017] utils.go 188: Calico CNI releasing IP address ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Oct 8 19:56:45.693161 containerd[1438]: 2024-10-08 19:56:45.673 [INFO][5024] ipam_plugin.go 417: Releasing address using handleID ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" HandleID="k8s-pod-network.c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Workload="localhost-k8s-coredns--76f75df574--bsxtp-eth0" Oct 8 19:56:45.693161 containerd[1438]: 2024-10-08 19:56:45.673 [INFO][5024] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:56:45.693161 containerd[1438]: 2024-10-08 19:56:45.673 [INFO][5024] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:56:45.693161 containerd[1438]: 2024-10-08 19:56:45.684 [WARNING][5024] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" HandleID="k8s-pod-network.c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Workload="localhost-k8s-coredns--76f75df574--bsxtp-eth0"
Oct 8 19:56:45.693161 containerd[1438]: 2024-10-08 19:56:45.684 [INFO][5024] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" HandleID="k8s-pod-network.c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Workload="localhost-k8s-coredns--76f75df574--bsxtp-eth0"
Oct 8 19:56:45.693161 containerd[1438]: 2024-10-08 19:56:45.687 [INFO][5024] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:45.693161 containerd[1438]: 2024-10-08 19:56:45.690 [INFO][5017] k8s.go 621: Teardown processing complete. ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190"
Oct 8 19:56:45.694406 containerd[1438]: time="2024-10-08T19:56:45.693189518Z" level=info msg="TearDown network for sandbox \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\" successfully"
Oct 8 19:56:45.694406 containerd[1438]: time="2024-10-08T19:56:45.693225718Z" level=info msg="StopPodSandbox for \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\" returns successfully"
Oct 8 19:56:45.694406 containerd[1438]: time="2024-10-08T19:56:45.693744001Z" level=info msg="RemovePodSandbox for \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\""
Oct 8 19:56:45.694406 containerd[1438]: time="2024-10-08T19:56:45.693775881Z" level=info msg="Forcibly stopping sandbox \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\""
Oct 8 19:56:45.769496 containerd[1438]: 2024-10-08 19:56:45.736 [WARNING][5046] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--bsxtp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7a056a3c-fdd6-4ec5-aafb-275c20cfa1fd", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d20d992cc5f8a0629386d4c89c51f8882f21daafac3ae276378d0cad71de80e9", Pod:"coredns-76f75df574-bsxtp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ff0080c38b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:45.769496 containerd[1438]: 2024-10-08 19:56:45.736 [INFO][5046] k8s.go 608: Cleaning up netns ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190"
Oct 8 19:56:45.769496 containerd[1438]: 2024-10-08 19:56:45.736 [INFO][5046] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" iface="eth0" netns=""
Oct 8 19:56:45.769496 containerd[1438]: 2024-10-08 19:56:45.736 [INFO][5046] k8s.go 615: Releasing IP address(es) ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190"
Oct 8 19:56:45.769496 containerd[1438]: 2024-10-08 19:56:45.736 [INFO][5046] utils.go 188: Calico CNI releasing IP address ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190"
Oct 8 19:56:45.769496 containerd[1438]: 2024-10-08 19:56:45.755 [INFO][5053] ipam_plugin.go 417: Releasing address using handleID ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" HandleID="k8s-pod-network.c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Workload="localhost-k8s-coredns--76f75df574--bsxtp-eth0"
Oct 8 19:56:45.769496 containerd[1438]: 2024-10-08 19:56:45.755 [INFO][5053] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:45.769496 containerd[1438]: 2024-10-08 19:56:45.755 [INFO][5053] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:45.769496 containerd[1438]: 2024-10-08 19:56:45.763 [WARNING][5053] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" HandleID="k8s-pod-network.c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Workload="localhost-k8s-coredns--76f75df574--bsxtp-eth0"
Oct 8 19:56:45.769496 containerd[1438]: 2024-10-08 19:56:45.763 [INFO][5053] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" HandleID="k8s-pod-network.c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190" Workload="localhost-k8s-coredns--76f75df574--bsxtp-eth0"
Oct 8 19:56:45.769496 containerd[1438]: 2024-10-08 19:56:45.765 [INFO][5053] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:45.769496 containerd[1438]: 2024-10-08 19:56:45.766 [INFO][5046] k8s.go 621: Teardown processing complete. ContainerID="c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190"
Oct 8 19:56:45.769496 containerd[1438]: time="2024-10-08T19:56:45.768697465Z" level=info msg="TearDown network for sandbox \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\" successfully"
Oct 8 19:56:45.771586 containerd[1438]: time="2024-10-08T19:56:45.771536440Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 8 19:56:45.771666 containerd[1438]: time="2024-10-08T19:56:45.771609560Z" level=info msg="RemovePodSandbox \"c6d74782141bed5fdee36ac6d21198a105a135151a45e19f98828e53b4de7190\" returns successfully"
Oct 8 19:56:45.772146 containerd[1438]: time="2024-10-08T19:56:45.772116643Z" level=info msg="StopPodSandbox for \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\""
Oct 8 19:56:45.839160 containerd[1438]: 2024-10-08 19:56:45.806 [WARNING][5076] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8q6v8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6bbb92e-10e0-4d4e-8c4d-e05b88c82846", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3", Pod:"csi-node-driver-8q6v8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali31e5bd49fc1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:45.839160 containerd[1438]: 2024-10-08 19:56:45.807 [INFO][5076] k8s.go 608: Cleaning up netns ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070"
Oct 8 19:56:45.839160 containerd[1438]: 2024-10-08 19:56:45.807 [INFO][5076] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" iface="eth0" netns=""
Oct 8 19:56:45.839160 containerd[1438]: 2024-10-08 19:56:45.807 [INFO][5076] k8s.go 615: Releasing IP address(es) ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070"
Oct 8 19:56:45.839160 containerd[1438]: 2024-10-08 19:56:45.807 [INFO][5076] utils.go 188: Calico CNI releasing IP address ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070"
Oct 8 19:56:45.839160 containerd[1438]: 2024-10-08 19:56:45.825 [INFO][5083] ipam_plugin.go 417: Releasing address using handleID ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" HandleID="k8s-pod-network.1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" Workload="localhost-k8s-csi--node--driver--8q6v8-eth0"
Oct 8 19:56:45.839160 containerd[1438]: 2024-10-08 19:56:45.825 [INFO][5083] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:45.839160 containerd[1438]: 2024-10-08 19:56:45.825 [INFO][5083] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:45.839160 containerd[1438]: 2024-10-08 19:56:45.834 [WARNING][5083] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" HandleID="k8s-pod-network.1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" Workload="localhost-k8s-csi--node--driver--8q6v8-eth0"
Oct 8 19:56:45.839160 containerd[1438]: 2024-10-08 19:56:45.834 [INFO][5083] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" HandleID="k8s-pod-network.1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" Workload="localhost-k8s-csi--node--driver--8q6v8-eth0"
Oct 8 19:56:45.839160 containerd[1438]: 2024-10-08 19:56:45.835 [INFO][5083] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:45.839160 containerd[1438]: 2024-10-08 19:56:45.837 [INFO][5076] k8s.go 621: Teardown processing complete. ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070"
Oct 8 19:56:45.839570 containerd[1438]: time="2024-10-08T19:56:45.839200506Z" level=info msg="TearDown network for sandbox \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\" successfully"
Oct 8 19:56:45.839570 containerd[1438]: time="2024-10-08T19:56:45.839227346Z" level=info msg="StopPodSandbox for \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\" returns successfully"
Oct 8 19:56:45.839870 containerd[1438]: time="2024-10-08T19:56:45.839775829Z" level=info msg="RemovePodSandbox for \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\""
Oct 8 19:56:45.839969 containerd[1438]: time="2024-10-08T19:56:45.839881630Z" level=info msg="Forcibly stopping sandbox \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\""
Oct 8 19:56:45.912081 containerd[1438]: 2024-10-08 19:56:45.877 [WARNING][5106] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8q6v8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6bbb92e-10e0-4d4e-8c4d-e05b88c82846", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 56, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a394c8777c86ea48de71b4a32cab3ea37229174459a25045af6d1a65c11b6c3", Pod:"csi-node-driver-8q6v8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali31e5bd49fc1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:56:45.912081 containerd[1438]: 2024-10-08 19:56:45.877 [INFO][5106] k8s.go 608: Cleaning up netns ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070"
Oct 8 19:56:45.912081 containerd[1438]: 2024-10-08 19:56:45.877 [INFO][5106] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" iface="eth0" netns=""
Oct 8 19:56:45.912081 containerd[1438]: 2024-10-08 19:56:45.877 [INFO][5106] k8s.go 615: Releasing IP address(es) ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070"
Oct 8 19:56:45.912081 containerd[1438]: 2024-10-08 19:56:45.877 [INFO][5106] utils.go 188: Calico CNI releasing IP address ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070"
Oct 8 19:56:45.912081 containerd[1438]: 2024-10-08 19:56:45.895 [INFO][5114] ipam_plugin.go 417: Releasing address using handleID ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" HandleID="k8s-pod-network.1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" Workload="localhost-k8s-csi--node--driver--8q6v8-eth0"
Oct 8 19:56:45.912081 containerd[1438]: 2024-10-08 19:56:45.895 [INFO][5114] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:56:45.912081 containerd[1438]: 2024-10-08 19:56:45.896 [INFO][5114] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:56:45.912081 containerd[1438]: 2024-10-08 19:56:45.907 [WARNING][5114] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" HandleID="k8s-pod-network.1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" Workload="localhost-k8s-csi--node--driver--8q6v8-eth0"
Oct 8 19:56:45.912081 containerd[1438]: 2024-10-08 19:56:45.907 [INFO][5114] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" HandleID="k8s-pod-network.1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070" Workload="localhost-k8s-csi--node--driver--8q6v8-eth0"
Oct 8 19:56:45.912081 containerd[1438]: 2024-10-08 19:56:45.908 [INFO][5114] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:56:45.912081 containerd[1438]: 2024-10-08 19:56:45.910 [INFO][5106] k8s.go 621: Teardown processing complete. ContainerID="1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070"
Oct 8 19:56:45.912497 containerd[1438]: time="2024-10-08T19:56:45.912127240Z" level=info msg="TearDown network for sandbox \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\" successfully"
Oct 8 19:56:45.914825 containerd[1438]: time="2024-10-08T19:56:45.914791414Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 8 19:56:45.914884 containerd[1438]: time="2024-10-08T19:56:45.914861014Z" level=info msg="RemovePodSandbox \"1c27fd887c5f6fb67a75db43d3340906df068536c59c54cc7b40747a02d76070\" returns successfully"
Oct 8 19:56:50.511987 systemd[1]: Started sshd@19-10.0.0.134:22-10.0.0.1:41502.service - OpenSSH per-connection server daemon (10.0.0.1:41502).
Oct 8 19:56:50.551734 sshd[5134]: Accepted publickey for core from 10.0.0.1 port 41502 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM
Oct 8 19:56:50.553035 sshd[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:56:50.556568 systemd-logind[1419]: New session 20 of user core.
Oct 8 19:56:50.572498 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 8 19:56:50.712560 sshd[5134]: pam_unix(sshd:session): session closed for user core
Oct 8 19:56:50.715100 systemd[1]: session-20.scope: Deactivated successfully.
Oct 8 19:56:50.716354 systemd[1]: sshd@19-10.0.0.134:22-10.0.0.1:41502.service: Deactivated successfully.
Oct 8 19:56:50.718270 systemd-logind[1419]: Session 20 logged out. Waiting for processes to exit.
Oct 8 19:56:50.719176 systemd-logind[1419]: Removed session 20.