Oct 8 20:03:27.881042 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 8 20:03:27.881062 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Oct 8 18:25:39 -00 2024
Oct 8 20:03:27.881072 kernel: KASLR enabled
Oct 8 20:03:27.881077 kernel: efi: EFI v2.7 by EDK II
Oct 8 20:03:27.881083 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Oct 8 20:03:27.881089 kernel: random: crng init done
Oct 8 20:03:27.881095 kernel: ACPI: Early table checksum verification disabled
Oct 8 20:03:27.881101 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Oct 8 20:03:27.881107 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 8 20:03:27.881115 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:03:27.881121 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:03:27.881127 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:03:27.881133 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:03:27.881139 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:03:27.881146 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:03:27.881154 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:03:27.881161 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:03:27.881167 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:03:27.881173 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 8 20:03:27.881180 kernel: NUMA: Failed to initialise from firmware
Oct 8 20:03:27.881186 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 20:03:27.881192 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Oct 8 20:03:27.881199 kernel: Zone ranges:
Oct 8 20:03:27.881219 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 20:03:27.881225 kernel:   DMA32    empty
Oct 8 20:03:27.881233 kernel:   Normal   empty
Oct 8 20:03:27.881239 kernel: Movable zone start for each node
Oct 8 20:03:27.881245 kernel: Early memory node ranges
Oct 8 20:03:27.881252 kernel:   node   0: [mem 0x0000000040000000-0x00000000d976ffff]
Oct 8 20:03:27.881258 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Oct 8 20:03:27.881265 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Oct 8 20:03:27.881271 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 8 20:03:27.881277 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 8 20:03:27.881284 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 8 20:03:27.881290 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 8 20:03:27.881296 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 8 20:03:27.881302 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 8 20:03:27.881310 kernel: psci: probing for conduit method from ACPI.
Oct 8 20:03:27.881316 kernel: psci: PSCIv1.1 detected in firmware.
Oct 8 20:03:27.881323 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 8 20:03:27.881332 kernel: psci: Trusted OS migration not required
Oct 8 20:03:27.881338 kernel: psci: SMC Calling Convention v1.1
Oct 8 20:03:27.881345 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 8 20:03:27.881353 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 8 20:03:27.881360 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 8 20:03:27.881367 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 8 20:03:27.881374 kernel: Detected PIPT I-cache on CPU0
Oct 8 20:03:27.881380 kernel: CPU features: detected: GIC system register CPU interface
Oct 8 20:03:27.881387 kernel: CPU features: detected: Hardware dirty bit management
Oct 8 20:03:27.881394 kernel: CPU features: detected: Spectre-v4
Oct 8 20:03:27.881400 kernel: CPU features: detected: Spectre-BHB
Oct 8 20:03:27.881407 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 8 20:03:27.881414 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 8 20:03:27.881422 kernel: CPU features: detected: ARM erratum 1418040
Oct 8 20:03:27.881429 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 8 20:03:27.881435 kernel: alternatives: applying boot alternatives
Oct 8 20:03:27.881443 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f7968382bc5b46f9b6104a9f012cfba991c8ea306771e716a099618547de81d3
Oct 8 20:03:27.881450 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 20:03:27.881457 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 20:03:27.881464 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 20:03:27.881471 kernel: Fallback order for Node 0: 0
Oct 8 20:03:27.881477 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 8 20:03:27.881484 kernel: Policy zone: DMA
Oct 8 20:03:27.881490 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 20:03:27.881498 kernel: software IO TLB: area num 4.
Oct 8 20:03:27.881505 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Oct 8 20:03:27.881512 kernel: Memory: 2386468K/2572288K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39360K init, 897K bss, 185820K reserved, 0K cma-reserved)
Oct 8 20:03:27.881519 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 8 20:03:27.881525 kernel: trace event string verifier disabled
Oct 8 20:03:27.881532 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 20:03:27.881539 kernel: rcu: RCU event tracing is enabled.
Oct 8 20:03:27.881546 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 8 20:03:27.881553 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 20:03:27.881560 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 20:03:27.881567 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 20:03:27.881573 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 8 20:03:27.881581 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 8 20:03:27.881588 kernel: GICv3: 256 SPIs implemented
Oct 8 20:03:27.881595 kernel: GICv3: 0 Extended SPIs implemented
Oct 8 20:03:27.881601 kernel: Root IRQ handler: gic_handle_irq
Oct 8 20:03:27.881608 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 8 20:03:27.881615 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 8 20:03:27.881621 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 8 20:03:27.881628 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Oct 8 20:03:27.881635 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Oct 8 20:03:27.881642 kernel: GICv3: using LPI property table @0x00000000400f0000
Oct 8 20:03:27.881648 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Oct 8 20:03:27.881656 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 20:03:27.881663 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 20:03:27.881670 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 8 20:03:27.881677 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 8 20:03:27.881683 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 8 20:03:27.881690 kernel: arm-pv: using stolen time PV
Oct 8 20:03:27.881697 kernel: Console: colour dummy device 80x25
Oct 8 20:03:27.881704 kernel: ACPI: Core revision 20230628
Oct 8 20:03:27.881711 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 8 20:03:27.881718 kernel: pid_max: default: 32768 minimum: 301
Oct 8 20:03:27.881726 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 8 20:03:27.881733 kernel: landlock: Up and running.
Oct 8 20:03:27.881740 kernel: SELinux: Initializing.
Oct 8 20:03:27.881747 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 20:03:27.881754 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 20:03:27.881761 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:03:27.881768 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:03:27.881774 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 20:03:27.881781 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 20:03:27.881790 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 8 20:03:27.881796 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 8 20:03:27.881803 kernel: Remapping and enabling EFI services.
Oct 8 20:03:27.881810 kernel: smp: Bringing up secondary CPUs ...
Oct 8 20:03:27.881817 kernel: Detected PIPT I-cache on CPU1
Oct 8 20:03:27.881824 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 8 20:03:27.881837 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Oct 8 20:03:27.881845 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 20:03:27.881851 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 8 20:03:27.881858 kernel: Detected PIPT I-cache on CPU2
Oct 8 20:03:27.881867 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 8 20:03:27.881874 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Oct 8 20:03:27.881885 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 20:03:27.881894 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 8 20:03:27.881901 kernel: Detected PIPT I-cache on CPU3
Oct 8 20:03:27.881909 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 8 20:03:27.881916 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Oct 8 20:03:27.881923 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 20:03:27.881930 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 8 20:03:27.881939 kernel: smp: Brought up 1 node, 4 CPUs
Oct 8 20:03:27.881960 kernel: SMP: Total of 4 processors activated.
Oct 8 20:03:27.881969 kernel: CPU features: detected: 32-bit EL0 Support
Oct 8 20:03:27.881976 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 8 20:03:27.881984 kernel: CPU features: detected: Common not Private translations
Oct 8 20:03:27.881991 kernel: CPU features: detected: CRC32 instructions
Oct 8 20:03:27.881998 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 8 20:03:27.882005 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 8 20:03:27.882014 kernel: CPU features: detected: LSE atomic instructions
Oct 8 20:03:27.882022 kernel: CPU features: detected: Privileged Access Never
Oct 8 20:03:27.882029 kernel: CPU features: detected: RAS Extension Support
Oct 8 20:03:27.882036 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 8 20:03:27.882043 kernel: CPU: All CPU(s) started at EL1
Oct 8 20:03:27.882051 kernel: alternatives: applying system-wide alternatives
Oct 8 20:03:27.882058 kernel: devtmpfs: initialized
Oct 8 20:03:27.882065 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 20:03:27.882073 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 8 20:03:27.882082 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 20:03:27.882089 kernel: SMBIOS 3.0.0 present.
Oct 8 20:03:27.882096 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Oct 8 20:03:27.882104 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 20:03:27.882111 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 8 20:03:27.882118 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 8 20:03:27.882126 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 8 20:03:27.882133 kernel: audit: initializing netlink subsys (disabled)
Oct 8 20:03:27.882140 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Oct 8 20:03:27.882149 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 20:03:27.882156 kernel: cpuidle: using governor menu
Oct 8 20:03:27.882164 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 8 20:03:27.882171 kernel: ASID allocator initialised with 32768 entries
Oct 8 20:03:27.882178 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 20:03:27.882186 kernel: Serial: AMBA PL011 UART driver
Oct 8 20:03:27.882193 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 8 20:03:27.882200 kernel: Modules: 0 pages in range for non-PLT usage
Oct 8 20:03:27.882282 kernel: Modules: 509024 pages in range for PLT usage
Oct 8 20:03:27.882291 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 20:03:27.882299 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 20:03:27.882306 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 8 20:03:27.882313 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 8 20:03:27.882321 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 20:03:27.882328 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 20:03:27.882335 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 8 20:03:27.882342 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 8 20:03:27.882350 kernel: ACPI: Added _OSI(Module Device)
Oct 8 20:03:27.882358 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 20:03:27.882365 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 20:03:27.882373 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 20:03:27.882380 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 20:03:27.882388 kernel: ACPI: Interpreter enabled
Oct 8 20:03:27.882395 kernel: ACPI: Using GIC for interrupt routing
Oct 8 20:03:27.882402 kernel: ACPI: MCFG table detected, 1 entries
Oct 8 20:03:27.882409 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 8 20:03:27.882417 kernel: printk: console [ttyAMA0] enabled
Oct 8 20:03:27.882425 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 20:03:27.882559 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 20:03:27.882633 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 8 20:03:27.882715 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 8 20:03:27.882781 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 8 20:03:27.882855 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 8 20:03:27.882866 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 8 20:03:27.882877 kernel: PCI host bridge to bus 0000:00
Oct 8 20:03:27.882948 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 8 20:03:27.883008 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 8 20:03:27.883067 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 8 20:03:27.883125 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 20:03:27.883220 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 8 20:03:27.883300 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 8 20:03:27.883371 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 8 20:03:27.883438 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 8 20:03:27.883510 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 20:03:27.883577 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 20:03:27.883645 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 8 20:03:27.883713 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 8 20:03:27.883773 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 8 20:03:27.883842 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 8 20:03:27.883903 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 8 20:03:27.883913 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 8 20:03:27.883920 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 8 20:03:27.883928 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 8 20:03:27.883935 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 8 20:03:27.883943 kernel: iommu: Default domain type: Translated
Oct 8 20:03:27.883950 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 8 20:03:27.883959 kernel: efivars: Registered efivars operations
Oct 8 20:03:27.883967 kernel: vgaarb: loaded
Oct 8 20:03:27.883974 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 8 20:03:27.883981 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 20:03:27.883989 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 20:03:27.883996 kernel: pnp: PnP ACPI init
Oct 8 20:03:27.884082 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 8 20:03:27.884093 kernel: pnp: PnP ACPI: found 1 devices
Oct 8 20:03:27.884102 kernel: NET: Registered PF_INET protocol family
Oct 8 20:03:27.884110 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 20:03:27.884117 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 20:03:27.884124 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 20:03:27.884132 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 20:03:27.884139 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 20:03:27.884147 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 20:03:27.884154 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 20:03:27.884161 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 20:03:27.884170 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 20:03:27.884177 kernel: PCI: CLS 0 bytes, default 64
Oct 8 20:03:27.884185 kernel: kvm [1]: HYP mode not available
Oct 8 20:03:27.884192 kernel: Initialise system trusted keyrings
Oct 8 20:03:27.884200 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 8 20:03:27.884216 kernel: Key type asymmetric registered
Oct 8 20:03:27.884223 kernel: Asymmetric key parser 'x509' registered
Oct 8 20:03:27.884230 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 8 20:03:27.884238 kernel: io scheduler mq-deadline registered
Oct 8 20:03:27.884247 kernel: io scheduler kyber registered
Oct 8 20:03:27.884254 kernel: io scheduler bfq registered
Oct 8 20:03:27.884261 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 8 20:03:27.884269 kernel: ACPI: button: Power Button [PWRB]
Oct 8 20:03:27.884276 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 8 20:03:27.884350 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 8 20:03:27.884360 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 20:03:27.884367 kernel: thunder_xcv, ver 1.0
Oct 8 20:03:27.884374 kernel: thunder_bgx, ver 1.0
Oct 8 20:03:27.884383 kernel: nicpf, ver 1.0
Oct 8 20:03:27.884391 kernel: nicvf, ver 1.0
Oct 8 20:03:27.884465 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 8 20:03:27.884529 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T20:03:27 UTC (1728417807)
Oct 8 20:03:27.884539 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 8 20:03:27.884546 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 8 20:03:27.884554 kernel: watchdog: Delayed init of the lockup detector failed: -19
Oct 8 20:03:27.884561 kernel: watchdog: Hard watchdog permanently disabled
Oct 8 20:03:27.884571 kernel: NET: Registered PF_INET6 protocol family
Oct 8 20:03:27.884578 kernel: Segment Routing with IPv6
Oct 8 20:03:27.884586 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 20:03:27.884593 kernel: NET: Registered PF_PACKET protocol family
Oct 8 20:03:27.884600 kernel: Key type dns_resolver registered
Oct 8 20:03:27.884607 kernel: registered taskstats version 1
Oct 8 20:03:27.884615 kernel: Loading compiled-in X.509 certificates
Oct 8 20:03:27.884622 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e9e638352c282bfddf5aec6da700ad8191939d05'
Oct 8 20:03:27.884629 kernel: Key type .fscrypt registered
Oct 8 20:03:27.884638 kernel: Key type fscrypt-provisioning registered
Oct 8 20:03:27.884645 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 20:03:27.884653 kernel: ima: Allocated hash algorithm: sha1
Oct 8 20:03:27.884660 kernel: ima: No architecture policies found
Oct 8 20:03:27.884667 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 8 20:03:27.884675 kernel: clk: Disabling unused clocks
Oct 8 20:03:27.884682 kernel: Freeing unused kernel memory: 39360K
Oct 8 20:03:27.884689 kernel: Run /init as init process
Oct 8 20:03:27.884696 kernel:   with arguments:
Oct 8 20:03:27.884705 kernel:     /init
Oct 8 20:03:27.884712 kernel:   with environment:
Oct 8 20:03:27.884719 kernel:     HOME=/
Oct 8 20:03:27.884726 kernel:     TERM=linux
Oct 8 20:03:27.884733 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 20:03:27.884742 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 20:03:27.884752 systemd[1]: Detected virtualization kvm.
Oct 8 20:03:27.884759 systemd[1]: Detected architecture arm64.
Oct 8 20:03:27.884769 systemd[1]: Running in initrd.
Oct 8 20:03:27.884776 systemd[1]: No hostname configured, using default hostname.
Oct 8 20:03:27.884784 systemd[1]: Hostname set to .
Oct 8 20:03:27.884792 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 20:03:27.884799 systemd[1]: Queued start job for default target initrd.target.
Oct 8 20:03:27.884807 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:03:27.884815 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:03:27.884823 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 20:03:27.884840 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 20:03:27.884848 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 20:03:27.884856 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 20:03:27.884865 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 20:03:27.884874 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 20:03:27.884882 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:03:27.884889 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:03:27.884899 systemd[1]: Reached target paths.target - Path Units.
Oct 8 20:03:27.884907 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 20:03:27.884915 systemd[1]: Reached target swap.target - Swaps.
Oct 8 20:03:27.884923 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 20:03:27.884931 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 20:03:27.884938 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 20:03:27.884946 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 20:03:27.884954 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 20:03:27.884963 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:03:27.884971 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:03:27.884979 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:03:27.884987 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 20:03:27.884995 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 20:03:27.885003 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 20:03:27.885010 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 20:03:27.885018 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 20:03:27.885026 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 20:03:27.885035 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 20:03:27.885043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:03:27.885051 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 20:03:27.885058 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:03:27.885066 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 20:03:27.885074 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 20:03:27.885084 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:03:27.885108 systemd-journald[238]: Collecting audit messages is disabled.
Oct 8 20:03:27.885128 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 20:03:27.885137 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 20:03:27.885146 systemd-journald[238]: Journal started
Oct 8 20:03:27.885164 systemd-journald[238]: Runtime Journal (/run/log/journal/d33163d6fe5f4af09ab235edfcc2c8ad) is 5.9M, max 47.3M, 41.4M free.
Oct 8 20:03:27.876054 systemd-modules-load[239]: Inserted module 'overlay'
Oct 8 20:03:27.890135 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 20:03:27.890171 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 20:03:27.892030 systemd-modules-load[239]: Inserted module 'br_netfilter'
Oct 8 20:03:27.892728 kernel: Bridge firewalling registered
Oct 8 20:03:27.892669 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 20:03:27.894082 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 20:03:27.896874 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:03:27.899118 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 20:03:27.904352 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:03:27.907280 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:03:27.909407 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:03:27.910488 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:03:27.920421 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 20:03:27.922286 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 20:03:27.931677 dracut-cmdline[273]: dracut-dracut-053
Oct 8 20:03:27.934045 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f7968382bc5b46f9b6104a9f012cfba991c8ea306771e716a099618547de81d3
Oct 8 20:03:27.948979 systemd-resolved[279]: Positive Trust Anchors:
Oct 8 20:03:27.948998 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 20:03:27.949030 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 20:03:27.953774 systemd-resolved[279]: Defaulting to hostname 'linux'.
Oct 8 20:03:27.955211 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 20:03:27.956075 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:03:28.004240 kernel: SCSI subsystem initialized
Oct 8 20:03:28.008224 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 20:03:28.020219 kernel: iscsi: registered transport (tcp)
Oct 8 20:03:28.030479 kernel: iscsi: registered transport (qla4xxx)
Oct 8 20:03:28.030513 kernel: QLogic iSCSI HBA Driver
Oct 8 20:03:28.073235 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 20:03:28.081386 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 20:03:28.098317 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 20:03:28.098375 kernel: device-mapper: uevent: version 1.0.3
Oct 8 20:03:28.099239 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 20:03:28.146243 kernel: raid6: neonx8   gen() 15769 MB/s
Oct 8 20:03:28.163224 kernel: raid6: neonx4   gen() 15202 MB/s
Oct 8 20:03:28.180219 kernel: raid6: neonx2   gen() 13173 MB/s
Oct 8 20:03:28.197216 kernel: raid6: neonx1   gen() 10457 MB/s
Oct 8 20:03:28.214224 kernel: raid6: int64x8  gen()  6959 MB/s
Oct 8 20:03:28.231215 kernel: raid6: int64x4  gen()  7354 MB/s
Oct 8 20:03:28.248216 kernel: raid6: int64x2  gen()  6131 MB/s
Oct 8 20:03:28.265216 kernel: raid6: int64x1  gen()  5055 MB/s
Oct 8 20:03:28.265239 kernel: raid6: using algorithm neonx8 gen() 15769 MB/s
Oct 8 20:03:28.282225 kernel: raid6: .... xor() 11926 MB/s, rmw enabled
Oct 8 20:03:28.282239 kernel: raid6: using neon recovery algorithm
Oct 8 20:03:28.287245 kernel: xor: measuring software checksum speed
Oct 8 20:03:28.287259 kernel:    8regs           : 19317 MB/sec
Oct 8 20:03:28.288307 kernel:    32regs          : 19660 MB/sec
Oct 8 20:03:28.288322 kernel:    arm64_neon      : 26096 MB/sec
Oct 8 20:03:28.288332 kernel: xor: using function: arm64_neon (26096 MB/sec)
Oct 8 20:03:28.338224 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 20:03:28.351249 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 20:03:28.365371 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:03:28.377554 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Oct 8 20:03:28.380764 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:03:28.391385 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 20:03:28.402775 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Oct 8 20:03:28.429775 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 20:03:28.440393 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:03:28.482402 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:03:28.489374 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 8 20:03:28.500957 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 20:03:28.504717 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:03:28.505903 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:03:28.507663 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:03:28.516390 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 20:03:28.527300 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:03:28.541896 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 8 20:03:28.542162 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 8 20:03:28.542250 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 20:03:28.542368 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:03:28.548092 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 8 20:03:28.548112 kernel: GPT:9289727 != 19775487 Oct 8 20:03:28.548122 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 8 20:03:28.548131 kernel: GPT:9289727 != 19775487 Oct 8 20:03:28.548142 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 8 20:03:28.548153 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 20:03:28.548159 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:03:28.549421 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 8 20:03:28.549557 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:03:28.551722 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:03:28.561221 kernel: BTRFS: device fsid ad786f33-c7c5-429e-95f9-4ea457bd3916 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (514) Oct 8 20:03:28.561257 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (512) Oct 8 20:03:28.561504 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:03:28.573233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:03:28.581123 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 8 20:03:28.585504 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 8 20:03:28.589697 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 8 20:03:28.593331 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 8 20:03:28.594184 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 8 20:03:28.609728 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 20:03:28.611334 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:03:28.620297 disk-uuid[550]: Primary Header is updated. Oct 8 20:03:28.620297 disk-uuid[550]: Secondary Entries is updated. Oct 8 20:03:28.620297 disk-uuid[550]: Secondary Header is updated. Oct 8 20:03:28.626338 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 20:03:28.637142 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 8 20:03:28.640225 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 20:03:29.645221 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 20:03:29.645644 disk-uuid[552]: The operation has completed successfully. Oct 8 20:03:29.665970 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 20:03:29.666066 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 8 20:03:29.688359 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 8 20:03:29.691221 sh[575]: Success Oct 8 20:03:29.706239 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 8 20:03:29.733155 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 8 20:03:29.745476 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 20:03:29.746909 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 8 20:03:29.756966 kernel: BTRFS info (device dm-0): first mount of filesystem ad786f33-c7c5-429e-95f9-4ea457bd3916 Oct 8 20:03:29.757003 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 8 20:03:29.757013 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 20:03:29.757024 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 20:03:29.758292 kernel: BTRFS info (device dm-0): using free space tree Oct 8 20:03:29.761338 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 20:03:29.762637 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 8 20:03:29.774402 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 8 20:03:29.776133 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Oct 8 20:03:29.782762 kernel: BTRFS info (device vda6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687 Oct 8 20:03:29.782803 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 20:03:29.782813 kernel: BTRFS info (device vda6): using free space tree Oct 8 20:03:29.785239 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 20:03:29.792131 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 8 20:03:29.793563 kernel: BTRFS info (device vda6): last unmount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687 Oct 8 20:03:29.860364 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:03:29.876446 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 20:03:29.883653 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 8 20:03:29.887337 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 20:03:29.897126 systemd-networkd[754]: lo: Link UP Oct 8 20:03:29.897136 systemd-networkd[754]: lo: Gained carrier Oct 8 20:03:29.897815 systemd-networkd[754]: Enumeration completed Oct 8 20:03:29.897911 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 20:03:29.899366 systemd[1]: Reached target network.target - Network. Oct 8 20:03:29.900875 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:03:29.900878 systemd-networkd[754]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:03:29.901986 systemd-networkd[754]: eth0: Link UP Oct 8 20:03:29.901990 systemd-networkd[754]: eth0: Gained carrier Oct 8 20:03:29.901997 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Oct 8 20:03:29.923279 systemd-networkd[754]: eth0: DHCPv4 address 10.0.0.154/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 20:03:29.982656 ignition[757]: Ignition 2.19.0 Oct 8 20:03:29.982665 ignition[757]: Stage: fetch-offline Oct 8 20:03:29.982705 ignition[757]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:29.982714 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:03:29.982867 ignition[757]: parsed url from cmdline: "" Oct 8 20:03:29.982870 ignition[757]: no config URL provided Oct 8 20:03:29.982875 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:03:29.982882 ignition[757]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:03:29.982905 ignition[757]: op(1): [started] loading QEMU firmware config module Oct 8 20:03:29.982910 ignition[757]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 8 20:03:29.994612 ignition[757]: op(1): [finished] loading QEMU firmware config module Oct 8 20:03:29.994645 ignition[757]: QEMU firmware config was not found. Ignoring... Oct 8 20:03:30.031485 ignition[757]: parsing config with SHA512: be975b8fcf444d06edd8da181257e8aa63a0732aa248f4832e66791600c02e479e5ff468a17b9553ac43a7a2534db815ef645a5435d28c4d7e219aa9ba6eb067 Oct 8 20:03:30.038222 unknown[757]: fetched base config from "system" Oct 8 20:03:30.038233 unknown[757]: fetched user config from "qemu" Oct 8 20:03:30.038715 ignition[757]: fetch-offline: fetch-offline passed Oct 8 20:03:30.038778 ignition[757]: Ignition finished successfully Oct 8 20:03:30.040517 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:03:30.042233 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 8 20:03:30.048352 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 8 20:03:30.058471 ignition[771]: Ignition 2.19.0 Oct 8 20:03:30.058481 ignition[771]: Stage: kargs Oct 8 20:03:30.058640 ignition[771]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:30.058650 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:03:30.059538 ignition[771]: kargs: kargs passed Oct 8 20:03:30.059583 ignition[771]: Ignition finished successfully Oct 8 20:03:30.061800 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 8 20:03:30.063783 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 8 20:03:30.077569 ignition[779]: Ignition 2.19.0 Oct 8 20:03:30.077580 ignition[779]: Stage: disks Oct 8 20:03:30.077769 ignition[779]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:30.077779 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:03:30.078772 ignition[779]: disks: disks passed Oct 8 20:03:30.078820 ignition[779]: Ignition finished successfully Oct 8 20:03:30.080965 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 8 20:03:30.082451 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 8 20:03:30.084492 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 8 20:03:30.085371 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 20:03:30.087080 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 20:03:30.088646 systemd[1]: Reached target basic.target - Basic System. Oct 8 20:03:30.098616 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 8 20:03:30.109268 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 8 20:03:30.113853 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 8 20:03:30.126305 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 8 20:03:30.181151 systemd[1]: Mounted sysroot.mount - /sysroot. 
Oct 8 20:03:30.182379 kernel: EXT4-fs (vda9): mounted filesystem 833c86f3-93dd-4526-bb43-c7809dac8e51 r/w with ordered data mode. Quota mode: none. Oct 8 20:03:30.182296 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 8 20:03:30.189314 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:03:30.190729 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 8 20:03:30.191877 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 8 20:03:30.191928 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 8 20:03:30.191993 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:03:30.197935 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 8 20:03:30.201389 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798) Oct 8 20:03:30.201411 kernel: BTRFS info (device vda6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687 Oct 8 20:03:30.201422 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 20:03:30.201432 kernel: BTRFS info (device vda6): using free space tree Oct 8 20:03:30.200981 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 8 20:03:30.204218 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 20:03:30.205995 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 8 20:03:30.250855 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 20:03:30.254433 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory Oct 8 20:03:30.257564 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 20:03:30.260517 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 20:03:30.330518 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 8 20:03:30.338353 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 8 20:03:30.339668 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 8 20:03:30.344230 kernel: BTRFS info (device vda6): last unmount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687 Oct 8 20:03:30.360309 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 8 20:03:30.362775 ignition[911]: INFO : Ignition 2.19.0 Oct 8 20:03:30.362775 ignition[911]: INFO : Stage: mount Oct 8 20:03:30.364021 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:30.364021 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:03:30.364021 ignition[911]: INFO : mount: mount passed Oct 8 20:03:30.364021 ignition[911]: INFO : Ignition finished successfully Oct 8 20:03:30.365264 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 20:03:30.380312 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 20:03:30.755723 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 20:03:30.765411 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 8 20:03:30.775231 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (924) Oct 8 20:03:30.776882 kernel: BTRFS info (device vda6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687 Oct 8 20:03:30.776902 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 20:03:30.776913 kernel: BTRFS info (device vda6): using free space tree Oct 8 20:03:30.782232 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 20:03:30.782895 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 20:03:30.804390 ignition[941]: INFO : Ignition 2.19.0 Oct 8 20:03:30.804390 ignition[941]: INFO : Stage: files Oct 8 20:03:30.805621 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:30.805621 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:03:30.805621 ignition[941]: DEBUG : files: compiled without relabeling support, skipping Oct 8 20:03:30.808071 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 20:03:30.808071 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 20:03:30.810797 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 20:03:30.811814 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 20:03:30.811814 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 20:03:30.811303 unknown[941]: wrote ssh authorized keys file for user: core Oct 8 20:03:30.814570 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 8 20:03:30.814570 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 8 20:03:30.814570 ignition[941]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 8 20:03:30.814570 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Oct 8 20:03:30.874105 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 8 20:03:31.016980 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 8 20:03:31.016980 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 8 20:03:31.020377 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Oct 8 20:03:31.106473 systemd-networkd[754]: eth0: Gained IPv6LL Oct 8 20:03:31.331386 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 8 20:03:31.391239 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 8 20:03:31.393123 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Oct 8 20:03:31.393123 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Oct 8 20:03:31.393123 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:03:31.393123 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:03:31.393123 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 
20:03:31.393123 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:03:31.393123 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:03:31.393123 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:03:31.393123 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:03:31.393123 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:03:31.393123 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 8 20:03:31.393123 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 8 20:03:31.393123 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 8 20:03:31.393123 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Oct 8 20:03:31.567935 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Oct 8 20:03:31.848633 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 8 20:03:31.848633 ignition[941]: INFO : files: op(d): [started] processing unit "containerd.service" Oct 8 
20:03:31.851522 ignition[941]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 8 20:03:31.851522 ignition[941]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 8 20:03:31.851522 ignition[941]: INFO : files: op(d): [finished] processing unit "containerd.service" Oct 8 20:03:31.851522 ignition[941]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Oct 8 20:03:31.851522 ignition[941]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:03:31.851522 ignition[941]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:03:31.851522 ignition[941]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Oct 8 20:03:31.851522 ignition[941]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Oct 8 20:03:31.851522 ignition[941]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 8 20:03:31.851522 ignition[941]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 8 20:03:31.851522 ignition[941]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Oct 8 20:03:31.851522 ignition[941]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Oct 8 20:03:31.880505 ignition[941]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 8 20:03:31.884554 ignition[941]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for 
"coreos-metadata.service" Oct 8 20:03:31.887190 ignition[941]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Oct 8 20:03:31.887190 ignition[941]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Oct 8 20:03:31.887190 ignition[941]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Oct 8 20:03:31.887190 ignition[941]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:03:31.887190 ignition[941]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:03:31.887190 ignition[941]: INFO : files: files passed Oct 8 20:03:31.887190 ignition[941]: INFO : Ignition finished successfully Oct 8 20:03:31.887560 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 8 20:03:31.897424 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 8 20:03:31.900413 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 8 20:03:31.906533 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 20:03:31.906639 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 8 20:03:31.911612 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory Oct 8 20:03:31.916165 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:03:31.916165 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:03:31.919529 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:03:31.922344 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Oct 8 20:03:31.923912 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 20:03:31.934426 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 20:03:31.955677 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 20:03:31.955847 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 8 20:03:31.958219 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 8 20:03:31.960155 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 8 20:03:31.961983 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 20:03:31.962767 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 20:03:31.981010 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:03:31.983587 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 20:03:31.996561 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:03:31.997843 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:03:31.999850 systemd[1]: Stopped target timers.target - Timer Units. Oct 8 20:03:32.001593 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 20:03:32.001719 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:03:32.004242 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 8 20:03:32.006265 systemd[1]: Stopped target basic.target - Basic System. Oct 8 20:03:32.007937 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 20:03:32.009661 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:03:32.011573 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Oct 8 20:03:32.013531 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 20:03:32.015332 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:03:32.017312 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 20:03:32.019250 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 20:03:32.020951 systemd[1]: Stopped target swap.target - Swaps. Oct 8 20:03:32.022448 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 8 20:03:32.022580 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:03:32.024924 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:03:32.026882 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:03:32.028833 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 8 20:03:32.032258 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:03:32.033492 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 20:03:32.033612 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 8 20:03:32.036440 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 8 20:03:32.036552 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:03:32.038482 systemd[1]: Stopped target paths.target - Path Units. Oct 8 20:03:32.040059 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 20:03:32.043256 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:03:32.044517 systemd[1]: Stopped target slices.target - Slice Units. Oct 8 20:03:32.046261 systemd[1]: Stopped target sockets.target - Socket Units. Oct 8 20:03:32.047709 systemd[1]: iscsid.socket: Deactivated successfully. 
Oct 8 20:03:32.047803 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:03:32.049158 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 8 20:03:32.049251 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:03:32.050614 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 8 20:03:32.050712 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:03:32.052352 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 20:03:32.052454 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 20:03:32.065554 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 20:03:32.066433 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 8 20:03:32.066561 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:03:32.069000 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 8 20:03:32.070433 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 20:03:32.070551 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:03:32.072200 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 8 20:03:32.072352 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:03:32.078433 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Oct 8 20:03:32.079332 ignition[997]: INFO : Ignition 2.19.0 Oct 8 20:03:32.079332 ignition[997]: INFO : Stage: umount Oct 8 20:03:32.079332 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:32.079332 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 20:03:32.084292 ignition[997]: INFO : umount: umount passed Oct 8 20:03:32.084292 ignition[997]: INFO : Ignition finished successfully Oct 8 20:03:32.080249 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 8 20:03:32.083758 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 20:03:32.083842 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 8 20:03:32.085689 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 20:03:32.086566 systemd[1]: Stopped target network.target - Network. Oct 8 20:03:32.090164 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 20:03:32.090246 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 20:03:32.091451 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 20:03:32.091487 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 20:03:32.092836 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 8 20:03:32.092878 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 8 20:03:32.094078 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 8 20:03:32.094116 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 8 20:03:32.095622 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 8 20:03:32.096850 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 8 20:03:32.104262 systemd-networkd[754]: eth0: DHCPv6 lease lost Oct 8 20:03:32.105496 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Oct 8 20:03:32.105626 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 8 20:03:32.108313 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 8 20:03:32.108425 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 8 20:03:32.110508 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 8 20:03:32.110571 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:03:32.119319 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 8 20:03:32.119977 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 8 20:03:32.120030 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:03:32.121449 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 20:03:32.121487 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:03:32.122792 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 8 20:03:32.122839 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 8 20:03:32.124371 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 8 20:03:32.124409 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:03:32.125931 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:03:32.134285 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 8 20:03:32.134392 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 8 20:03:32.139149 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 8 20:03:32.139305 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:03:32.140994 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 8 20:03:32.141030 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Oct 8 20:03:32.143011 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 20:03:32.143049 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:03:32.143938 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 20:03:32.143985 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 20:03:32.147042 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 20:03:32.147085 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 20:03:32.150871 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 20:03:32.150917 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:03:32.161374 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 20:03:32.162111 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 20:03:32.162163 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:03:32.163740 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:03:32.163779 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:03:32.165433 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 20:03:32.165511 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 20:03:32.166873 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 20:03:32.166962 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 20:03:32.169692 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 20:03:32.171066 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 20:03:32.171156 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 20:03:32.173559 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 20:03:32.182157 systemd[1]: Switching root.
Oct 8 20:03:32.206791 systemd-journald[238]: Journal stopped
Oct 8 20:03:32.966853 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Oct 8 20:03:32.966908 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 20:03:32.966920 kernel: SELinux: policy capability open_perms=1
Oct 8 20:03:32.966930 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 20:03:32.966940 kernel: SELinux: policy capability always_check_network=0
Oct 8 20:03:32.966949 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 20:03:32.966962 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 20:03:32.966972 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 20:03:32.966981 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 20:03:32.966991 kernel: audit: type=1403 audit(1728417812.425:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 20:03:32.967001 systemd[1]: Successfully loaded SELinux policy in 34.155ms.
Oct 8 20:03:32.967022 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.449ms.
Oct 8 20:03:32.967038 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 20:03:32.967051 systemd[1]: Detected virtualization kvm.
Oct 8 20:03:32.967062 systemd[1]: Detected architecture arm64.
Oct 8 20:03:32.967074 systemd[1]: Detected first boot.
Oct 8 20:03:32.967084 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 20:03:32.967095 zram_generator::config[1063]: No configuration found.
Oct 8 20:03:32.967106 systemd[1]: Populated /etc with preset unit settings.
Oct 8 20:03:32.967117 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 20:03:32.967127 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 8 20:03:32.967138 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 20:03:32.967149 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 20:03:32.967161 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 20:03:32.967172 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 20:03:32.967183 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 20:03:32.967194 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 20:03:32.967227 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 20:03:32.967241 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 20:03:32.967252 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:03:32.967262 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:03:32.967273 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 20:03:32.967285 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 20:03:32.967295 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 20:03:32.967307 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 20:03:32.967318 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 8 20:03:32.967328 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:03:32.967338 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 20:03:32.967349 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 20:03:32.967359 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 20:03:32.967370 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 20:03:32.967382 systemd[1]: Reached target swap.target - Swaps.
Oct 8 20:03:32.967393 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 20:03:32.967404 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 20:03:32.967415 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 20:03:32.967425 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 20:03:32.967436 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:03:32.967447 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:03:32.967458 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:03:32.967470 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 20:03:32.967480 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 20:03:32.967491 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 20:03:32.967501 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 20:03:32.967511 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 20:03:32.967521 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 20:03:32.967533 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 20:03:32.967543 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 20:03:32.967554 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:03:32.967566 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 20:03:32.967577 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 20:03:32.967587 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:03:32.967597 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 20:03:32.967608 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:03:32.967619 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 20:03:32.967629 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:03:32.967640 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 20:03:32.967652 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Oct 8 20:03:32.967662 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Oct 8 20:03:32.967672 kernel: fuse: init (API version 7.39)
Oct 8 20:03:32.967682 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 20:03:32.967692 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 20:03:32.967702 kernel: ACPI: bus type drm_connector registered
Oct 8 20:03:32.967713 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 20:03:32.967723 kernel: loop: module loaded
Oct 8 20:03:32.967733 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 20:03:32.967745 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 20:03:32.967771 systemd-journald[1142]: Collecting audit messages is disabled.
Oct 8 20:03:32.967796 systemd-journald[1142]: Journal started
Oct 8 20:03:32.967822 systemd-journald[1142]: Runtime Journal (/run/log/journal/d33163d6fe5f4af09ab235edfcc2c8ad) is 5.9M, max 47.3M, 41.4M free.
Oct 8 20:03:32.974536 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 20:03:32.975463 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 20:03:32.976905 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 20:03:32.977990 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 20:03:32.978810 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 20:03:32.979720 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 20:03:32.980641 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 20:03:32.981838 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 20:03:32.983335 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:03:32.984515 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 20:03:32.984673 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 20:03:32.985849 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:03:32.986025 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:03:32.987317 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 20:03:32.987464 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 20:03:32.988483 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:03:32.988632 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:03:32.990183 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 20:03:32.990348 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 20:03:32.991405 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:03:32.991599 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:03:32.992952 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:03:32.994118 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 20:03:32.995458 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 20:03:33.007299 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 20:03:33.018303 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 20:03:33.020130 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 20:03:33.020992 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 20:03:33.024016 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 20:03:33.026434 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 20:03:33.027339 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:03:33.030512 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 20:03:33.032220 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:03:33.036397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 20:03:33.040838 systemd-journald[1142]: Time spent on flushing to /var/log/journal/d33163d6fe5f4af09ab235edfcc2c8ad is 18.052ms for 849 entries.
Oct 8 20:03:33.040838 systemd-journald[1142]: System Journal (/var/log/journal/d33163d6fe5f4af09ab235edfcc2c8ad) is 8.0M, max 195.6M, 187.6M free.
Oct 8 20:03:33.078080 systemd-journald[1142]: Received client request to flush runtime journal.
Oct 8 20:03:33.041851 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 20:03:33.046866 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:03:33.048198 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 20:03:33.049116 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 20:03:33.056799 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 20:03:33.058067 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 20:03:33.059483 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:03:33.061643 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 20:03:33.069819 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 8 20:03:33.080594 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 20:03:33.082355 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Oct 8 20:03:33.082644 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Oct 8 20:03:33.086935 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 20:03:33.102359 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 20:03:33.122988 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 20:03:33.135392 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 20:03:33.147965 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Oct 8 20:03:33.147985 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Oct 8 20:03:33.151477 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:03:33.493911 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 20:03:33.508477 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:03:33.528851 systemd-udevd[1224]: Using default interface naming scheme 'v255'.
Oct 8 20:03:33.546751 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:03:33.556392 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 20:03:33.577857 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1241)
Oct 8 20:03:33.579382 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 20:03:33.584186 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Oct 8 20:03:33.610756 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1230)
Oct 8 20:03:33.616231 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1230)
Oct 8 20:03:33.627871 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 20:03:33.644666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 8 20:03:33.669425 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:03:33.679191 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 20:03:33.681850 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 20:03:33.696590 systemd-networkd[1237]: lo: Link UP
Oct 8 20:03:33.696599 systemd-networkd[1237]: lo: Gained carrier
Oct 8 20:03:33.697516 systemd-networkd[1237]: Enumeration completed
Oct 8 20:03:33.697790 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 20:03:33.698046 lvm[1265]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 20:03:33.698356 systemd-networkd[1237]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:03:33.698360 systemd-networkd[1237]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:03:33.701057 systemd-networkd[1237]: eth0: Link UP
Oct 8 20:03:33.701066 systemd-networkd[1237]: eth0: Gained carrier
Oct 8 20:03:33.701079 systemd-networkd[1237]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:03:33.703357 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 20:03:33.721275 systemd-networkd[1237]: eth0: DHCPv4 address 10.0.0.154/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 20:03:33.723361 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:03:33.725457 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 20:03:33.727431 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:03:33.746602 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 20:03:33.750285 lvm[1275]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 20:03:33.776834 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 20:03:33.778014 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 20:03:33.778973 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 20:03:33.779002 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 20:03:33.779742 systemd[1]: Reached target machines.target - Containers.
Oct 8 20:03:33.781409 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 20:03:33.796368 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 20:03:33.798779 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 20:03:33.799812 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:03:33.802383 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 20:03:33.804300 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 20:03:33.808112 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 20:03:33.809920 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 20:03:33.817541 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 20:03:33.820236 kernel: loop0: detected capacity change from 0 to 194512
Oct 8 20:03:33.828372 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 20:03:33.829081 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 20:03:33.834357 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 20:03:33.870240 kernel: loop1: detected capacity change from 0 to 114328
Oct 8 20:03:33.919258 kernel: loop2: detected capacity change from 0 to 114432
Oct 8 20:03:33.959231 kernel: loop3: detected capacity change from 0 to 194512
Oct 8 20:03:33.971251 kernel: loop4: detected capacity change from 0 to 114328
Oct 8 20:03:33.988264 kernel: loop5: detected capacity change from 0 to 114432
Oct 8 20:03:33.992034 (sd-merge)[1301]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 8 20:03:33.992527 (sd-merge)[1301]: Merged extensions into '/usr'.
Oct 8 20:03:33.996042 systemd[1]: Reloading requested from client PID 1288 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 20:03:33.996061 systemd[1]: Reloading...
Oct 8 20:03:34.037241 zram_generator::config[1329]: No configuration found.
Oct 8 20:03:34.091375 ldconfig[1284]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 20:03:34.136119 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:03:34.178837 systemd[1]: Reloading finished in 182 ms.
Oct 8 20:03:34.192264 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 20:03:34.193426 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 20:03:34.207437 systemd[1]: Starting ensure-sysext.service...
Oct 8 20:03:34.209227 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 20:03:34.214380 systemd[1]: Reloading requested from client PID 1371 ('systemctl') (unit ensure-sysext.service)...
Oct 8 20:03:34.214397 systemd[1]: Reloading...
Oct 8 20:03:34.225991 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 20:03:34.226285 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 20:03:34.226965 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 20:03:34.227186 systemd-tmpfiles[1378]: ACLs are not supported, ignoring.
Oct 8 20:03:34.227253 systemd-tmpfiles[1378]: ACLs are not supported, ignoring.
Oct 8 20:03:34.229870 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 20:03:34.229883 systemd-tmpfiles[1378]: Skipping /boot
Oct 8 20:03:34.237083 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 20:03:34.237098 systemd-tmpfiles[1378]: Skipping /boot
Oct 8 20:03:34.269367 zram_generator::config[1407]: No configuration found.
Oct 8 20:03:34.352477 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:03:34.394772 systemd[1]: Reloading finished in 180 ms.
Oct 8 20:03:34.410115 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 20:03:34.421323 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 20:03:34.423882 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 20:03:34.426327 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 20:03:34.429436 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 20:03:34.436355 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 20:03:34.445886 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:03:34.447046 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:03:34.452496 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:03:34.459035 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:03:34.460921 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:03:34.461689 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 20:03:34.465037 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:03:34.465180 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:03:34.467754 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:03:34.467907 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:03:34.474701 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 20:03:34.476037 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:03:34.476221 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:03:34.479971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:03:34.485441 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:03:34.487934 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:03:34.489433 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:03:34.493164 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 20:03:34.495692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:03:34.495847 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:03:34.497428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:03:34.497604 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:03:34.502270 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:03:34.503492 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:03:34.508599 systemd-resolved[1454]: Positive Trust Anchors:
Oct 8 20:03:34.508620 systemd-resolved[1454]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 20:03:34.508652 systemd-resolved[1454]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 20:03:34.509516 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 20:03:34.511460 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:03:34.514566 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:03:34.515665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:03:34.516906 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 20:03:34.517342 augenrules[1497]: No rules
Oct 8 20:03:34.518458 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 20:03:34.519618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:03:34.519752 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:03:34.520982 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:03:34.521106 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:03:34.523950 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 20:03:34.524584 systemd-resolved[1454]: Defaulting to hostname 'linux'.
Oct 8 20:03:34.525341 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:03:34.525471 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:03:34.526728 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 20:03:34.526931 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 20:03:34.530065 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 20:03:34.531711 systemd[1]: Finished ensure-sysext.service.
Oct 8 20:03:34.535314 systemd[1]: Reached target network.target - Network.
Oct 8 20:03:34.535990 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:03:34.536851 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:03:34.536907 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:03:34.548410 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 20:03:34.549230 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 20:03:34.591234 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 20:03:35.089276 systemd-resolved[1454]: Clock change detected. Flushing caches.
Oct 8 20:03:35.089323 systemd-timesyncd[1523]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 8 20:03:35.089369 systemd-timesyncd[1523]: Initial clock synchronization to Tue 2024-10-08 20:03:35.089217 UTC.
Oct 8 20:03:35.089490 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 20:03:35.090329 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 20:03:35.091224 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 20:03:35.092092 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 20:03:35.092977 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 20:03:35.093007 systemd[1]: Reached target paths.target - Path Units.
Oct 8 20:03:35.093677 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 20:03:35.094540 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 20:03:35.095407 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 20:03:35.096294 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 20:03:35.097565 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 20:03:35.099620 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 20:03:35.101422 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 20:03:35.108009 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 8 20:03:35.108843 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 20:03:35.109564 systemd[1]: Reached target basic.target - Basic System. Oct 8 20:03:35.110360 systemd[1]: System is tainted: cgroupsv1 Oct 8 20:03:35.110405 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 8 20:03:35.110424 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 8 20:03:35.111519 systemd[1]: Starting containerd.service - containerd container runtime... Oct 8 20:03:35.113270 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 8 20:03:35.114894 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 8 20:03:35.119270 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 8 20:03:35.120005 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 8 20:03:35.121019 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 8 20:03:35.130196 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 8 20:03:35.132379 jq[1529]: false Oct 8 20:03:35.134504 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 8 20:03:35.137228 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 8 20:03:35.141543 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 8 20:03:35.143987 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 8 20:03:35.146908 systemd[1]: Starting update-engine.service - Update Engine... 
Oct 8 20:03:35.149955 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 8 20:03:35.153673 extend-filesystems[1531]: Found loop3 Oct 8 20:03:35.153673 extend-filesystems[1531]: Found loop4 Oct 8 20:03:35.153673 extend-filesystems[1531]: Found loop5 Oct 8 20:03:35.153673 extend-filesystems[1531]: Found vda Oct 8 20:03:35.153673 extend-filesystems[1531]: Found vda1 Oct 8 20:03:35.153673 extend-filesystems[1531]: Found vda2 Oct 8 20:03:35.153673 extend-filesystems[1531]: Found vda3 Oct 8 20:03:35.153673 extend-filesystems[1531]: Found usr Oct 8 20:03:35.153673 extend-filesystems[1531]: Found vda4 Oct 8 20:03:35.166223 extend-filesystems[1531]: Found vda6 Oct 8 20:03:35.166223 extend-filesystems[1531]: Found vda7 Oct 8 20:03:35.166223 extend-filesystems[1531]: Found vda9 Oct 8 20:03:35.166223 extend-filesystems[1531]: Checking size of /dev/vda9 Oct 8 20:03:35.156645 dbus-daemon[1528]: [system] SELinux support is enabled Oct 8 20:03:35.154649 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 8 20:03:35.172143 jq[1549]: true Oct 8 20:03:35.154868 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 20:03:35.155125 systemd[1]: motdgen.service: Deactivated successfully. Oct 8 20:03:35.155313 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 8 20:03:35.158494 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 8 20:03:35.166461 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 20:03:35.166684 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Oct 8 20:03:35.184873 extend-filesystems[1531]: Resized partition /dev/vda9 Oct 8 20:03:35.182646 (ntainerd)[1559]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 20:03:35.191464 jq[1558]: true Oct 8 20:03:35.191597 extend-filesystems[1567]: resize2fs 1.47.1 (20-May-2024) Oct 8 20:03:35.192077 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 8 20:03:35.192101 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 8 20:03:35.201146 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 8 20:03:35.200479 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 8 20:03:35.200496 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 8 20:03:35.208419 tar[1556]: linux-arm64/helm Oct 8 20:03:35.214139 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1247) Oct 8 20:03:35.216705 update_engine[1547]: I20241008 20:03:35.216485 1547 main.cc:92] Flatcar Update Engine starting Oct 8 20:03:35.225458 systemd[1]: Started update-engine.service - Update Engine. Oct 8 20:03:35.227323 update_engine[1547]: I20241008 20:03:35.225739 1547 update_check_scheduler.cc:74] Next update check in 3m34s Oct 8 20:03:35.227452 systemd-logind[1544]: Watching system buttons on /dev/input/event0 (Power Button) Oct 8 20:03:35.227712 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 20:03:35.228616 systemd-logind[1544]: New seat seat0. 
Oct 8 20:03:35.233141 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 8 20:03:35.236292 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 20:03:35.237271 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 20:03:35.255202 extend-filesystems[1567]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 8 20:03:35.255202 extend-filesystems[1567]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 8 20:03:35.255202 extend-filesystems[1567]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 8 20:03:35.259211 extend-filesystems[1531]: Resized filesystem in /dev/vda9 Oct 8 20:03:35.260293 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 20:03:35.260559 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 20:03:35.280120 bash[1589]: Updated "/home/core/.ssh/authorized_keys" Oct 8 20:03:35.281564 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 20:03:35.283522 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 8 20:03:35.337985 locksmithd[1585]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 20:03:35.411174 containerd[1559]: time="2024-10-08T20:03:35.409552736Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 8 20:03:35.436904 containerd[1559]: time="2024-10-08T20:03:35.436868416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:03:35.438197 containerd[1559]: time="2024-10-08T20:03:35.438169896Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:03:35.438229 containerd[1559]: time="2024-10-08T20:03:35.438197536Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 20:03:35.438229 containerd[1559]: time="2024-10-08T20:03:35.438212376Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 20:03:35.438382 containerd[1559]: time="2024-10-08T20:03:35.438362216Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 20:03:35.438405 containerd[1559]: time="2024-10-08T20:03:35.438385536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 20:03:35.438453 containerd[1559]: time="2024-10-08T20:03:35.438436016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:03:35.438480 containerd[1559]: time="2024-10-08T20:03:35.438454056Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:03:35.438658 containerd[1559]: time="2024-10-08T20:03:35.438635936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:03:35.438683 containerd[1559]: time="2024-10-08T20:03:35.438657496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 8 20:03:35.438683 containerd[1559]: time="2024-10-08T20:03:35.438671376Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:03:35.438683 containerd[1559]: time="2024-10-08T20:03:35.438680816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 20:03:35.438780 containerd[1559]: time="2024-10-08T20:03:35.438760496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:03:35.438972 containerd[1559]: time="2024-10-08T20:03:35.438951776Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:03:35.439103 containerd[1559]: time="2024-10-08T20:03:35.439082256Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:03:35.439146 containerd[1559]: time="2024-10-08T20:03:35.439102056Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 20:03:35.439219 containerd[1559]: time="2024-10-08T20:03:35.439200216Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 20:03:35.439261 containerd[1559]: time="2024-10-08T20:03:35.439247416Z" level=info msg="metadata content store policy set" policy=shared Oct 8 20:03:35.442496 containerd[1559]: time="2024-10-08T20:03:35.442470176Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 20:03:35.442542 containerd[1559]: time="2024-10-08T20:03:35.442513856Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Oct 8 20:03:35.442542 containerd[1559]: time="2024-10-08T20:03:35.442530256Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 20:03:35.442577 containerd[1559]: time="2024-10-08T20:03:35.442544296Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 20:03:35.442577 containerd[1559]: time="2024-10-08T20:03:35.442560616Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 20:03:35.442715 containerd[1559]: time="2024-10-08T20:03:35.442691536Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 20:03:35.443256 containerd[1559]: time="2024-10-08T20:03:35.443078056Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 20:03:35.444082 containerd[1559]: time="2024-10-08T20:03:35.444051296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 20:03:35.444109 containerd[1559]: time="2024-10-08T20:03:35.444092976Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 20:03:35.444144 containerd[1559]: time="2024-10-08T20:03:35.444130416Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 20:03:35.444164 containerd[1559]: time="2024-10-08T20:03:35.444152016Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 20:03:35.444183 containerd[1559]: time="2024-10-08T20:03:35.444166736Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Oct 8 20:03:35.444201 containerd[1559]: time="2024-10-08T20:03:35.444183896Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 20:03:35.444218 containerd[1559]: time="2024-10-08T20:03:35.444202256Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 20:03:35.444236 containerd[1559]: time="2024-10-08T20:03:35.444221256Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 20:03:35.444253 containerd[1559]: time="2024-10-08T20:03:35.444238776Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 20:03:35.444277 containerd[1559]: time="2024-10-08T20:03:35.444255176Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 20:03:35.444277 containerd[1559]: time="2024-10-08T20:03:35.444271016Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 20:03:35.444310 containerd[1559]: time="2024-10-08T20:03:35.444295296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 20:03:35.444327 containerd[1559]: time="2024-10-08T20:03:35.444313296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 20:03:35.444349 containerd[1559]: time="2024-10-08T20:03:35.444326976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 20:03:35.444370 containerd[1559]: time="2024-10-08T20:03:35.444343456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Oct 8 20:03:35.444370 containerd[1559]: time="2024-10-08T20:03:35.444359616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 20:03:35.444407 containerd[1559]: time="2024-10-08T20:03:35.444377536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 20:03:35.444407 containerd[1559]: time="2024-10-08T20:03:35.444393656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 20:03:35.444447 containerd[1559]: time="2024-10-08T20:03:35.444410016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 20:03:35.444447 containerd[1559]: time="2024-10-08T20:03:35.444424856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 20:03:35.444489 containerd[1559]: time="2024-10-08T20:03:35.444444456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 20:03:35.444489 containerd[1559]: time="2024-10-08T20:03:35.444460616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 20:03:35.444489 containerd[1559]: time="2024-10-08T20:03:35.444476136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 20:03:35.444542 containerd[1559]: time="2024-10-08T20:03:35.444492176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 20:03:35.444542 containerd[1559]: time="2024-10-08T20:03:35.444513216Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 20:03:35.444542 containerd[1559]: time="2024-10-08T20:03:35.444538776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Oct 8 20:03:35.445140 containerd[1559]: time="2024-10-08T20:03:35.444677656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 20:03:35.445140 containerd[1559]: time="2024-10-08T20:03:35.444698096Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 20:03:35.445140 containerd[1559]: time="2024-10-08T20:03:35.444819736Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 20:03:35.445140 containerd[1559]: time="2024-10-08T20:03:35.444836696Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 20:03:35.445140 containerd[1559]: time="2024-10-08T20:03:35.444847176Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 20:03:35.445140 containerd[1559]: time="2024-10-08T20:03:35.444858176Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 20:03:35.445140 containerd[1559]: time="2024-10-08T20:03:35.444867136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 20:03:35.445140 containerd[1559]: time="2024-10-08T20:03:35.444882536Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 20:03:35.445140 containerd[1559]: time="2024-10-08T20:03:35.444892856Z" level=info msg="NRI interface is disabled by configuration." Oct 8 20:03:35.445140 containerd[1559]: time="2024-10-08T20:03:35.444902936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 8 20:03:35.445319 containerd[1559]: time="2024-10-08T20:03:35.445256096Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 20:03:35.445319 containerd[1559]: time="2024-10-08T20:03:35.445312336Z" level=info msg="Connect containerd service" Oct 8 20:03:35.445438 containerd[1559]: time="2024-10-08T20:03:35.445342096Z" level=info msg="using legacy CRI server" Oct 8 20:03:35.445438 containerd[1559]: time="2024-10-08T20:03:35.445349016Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 20:03:35.445438 containerd[1559]: time="2024-10-08T20:03:35.445425296Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 20:03:35.447130 containerd[1559]: time="2024-10-08T20:03:35.446155856Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 20:03:35.447130 containerd[1559]: time="2024-10-08T20:03:35.446349096Z" level=info msg="Start subscribing containerd event" Oct 8 20:03:35.447130 containerd[1559]: time="2024-10-08T20:03:35.446396016Z" level=info msg="Start recovering state" Oct 8 20:03:35.447130 containerd[1559]: time="2024-10-08T20:03:35.446459216Z" level=info msg="Start event monitor" Oct 8 20:03:35.447130 containerd[1559]: time="2024-10-08T20:03:35.446476056Z" level=info msg="Start snapshots 
syncer" Oct 8 20:03:35.447130 containerd[1559]: time="2024-10-08T20:03:35.446486416Z" level=info msg="Start cni network conf syncer for default" Oct 8 20:03:35.447130 containerd[1559]: time="2024-10-08T20:03:35.446493536Z" level=info msg="Start streaming server" Oct 8 20:03:35.447130 containerd[1559]: time="2024-10-08T20:03:35.447027536Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 20:03:35.447130 containerd[1559]: time="2024-10-08T20:03:35.447072656Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 20:03:35.450288 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 20:03:35.451677 containerd[1559]: time="2024-10-08T20:03:35.451650856Z" level=info msg="containerd successfully booted in 0.044989s" Oct 8 20:03:35.572169 tar[1556]: linux-arm64/LICENSE Oct 8 20:03:35.572250 tar[1556]: linux-arm64/README.md Oct 8 20:03:35.582491 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 20:03:35.684754 sshd_keygen[1554]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 20:03:35.703577 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 20:03:35.715465 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 20:03:35.720445 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 20:03:35.720762 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 20:03:35.723133 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 20:03:35.733922 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 20:03:35.736234 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 20:03:35.737952 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 8 20:03:35.738966 systemd[1]: Reached target getty.target - Login Prompts. 
Oct 8 20:03:36.147273 systemd-networkd[1237]: eth0: Gained IPv6LL Oct 8 20:03:36.150071 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 20:03:36.151547 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 20:03:36.167378 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 8 20:03:36.169738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:36.171701 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 20:03:36.189706 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 8 20:03:36.189973 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 8 20:03:36.191633 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 20:03:36.196286 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 20:03:36.645569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:03:36.647180 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 20:03:36.650076 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:03:36.650525 systemd[1]: Startup finished in 5.284s (kernel) + 3.762s (userspace) = 9.046s. Oct 8 20:03:37.125041 kubelet[1664]: E1008 20:03:37.124898 1664 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:03:37.127864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:03:37.128060 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 20:03:40.565831 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 20:03:40.577383 systemd[1]: Started sshd@0-10.0.0.154:22-10.0.0.1:60572.service - OpenSSH per-connection server daemon (10.0.0.1:60572). Oct 8 20:03:40.627300 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 60572 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:40.629107 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:40.642315 systemd-logind[1544]: New session 1 of user core. Oct 8 20:03:40.643206 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 20:03:40.655419 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 20:03:40.665176 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 20:03:40.667389 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 20:03:40.674093 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 20:03:40.748837 systemd[1685]: Queued start job for default target default.target. Oct 8 20:03:40.749223 systemd[1685]: Created slice app.slice - User Application Slice. Oct 8 20:03:40.749249 systemd[1685]: Reached target paths.target - Paths. Oct 8 20:03:40.749261 systemd[1685]: Reached target timers.target - Timers. Oct 8 20:03:40.759211 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 20:03:40.765028 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 20:03:40.765088 systemd[1685]: Reached target sockets.target - Sockets. Oct 8 20:03:40.765100 systemd[1685]: Reached target basic.target - Basic System. Oct 8 20:03:40.765150 systemd[1685]: Reached target default.target - Main User Target. Oct 8 20:03:40.765176 systemd[1685]: Startup finished in 85ms. Oct 8 20:03:40.765459 systemd[1]: Started user@500.service - User Manager for UID 500. 
Oct 8 20:03:40.766790 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 20:03:40.828352 systemd[1]: Started sshd@1-10.0.0.154:22-10.0.0.1:60580.service - OpenSSH per-connection server daemon (10.0.0.1:60580). Oct 8 20:03:40.860073 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 60580 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:40.861323 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:40.865856 systemd-logind[1544]: New session 2 of user core. Oct 8 20:03:40.880484 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 20:03:40.933341 sshd[1697]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:40.947374 systemd[1]: Started sshd@2-10.0.0.154:22-10.0.0.1:60596.service - OpenSSH per-connection server daemon (10.0.0.1:60596). Oct 8 20:03:40.947772 systemd[1]: sshd@1-10.0.0.154:22-10.0.0.1:60580.service: Deactivated successfully. Oct 8 20:03:40.950180 systemd-logind[1544]: Session 2 logged out. Waiting for processes to exit. Oct 8 20:03:40.950907 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 20:03:40.952408 systemd-logind[1544]: Removed session 2. Oct 8 20:03:40.979702 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 60596 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:40.981011 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:40.985424 systemd-logind[1544]: New session 3 of user core. Oct 8 20:03:40.995374 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 20:03:41.044136 sshd[1702]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:41.053362 systemd[1]: Started sshd@3-10.0.0.154:22-10.0.0.1:60606.service - OpenSSH per-connection server daemon (10.0.0.1:60606). Oct 8 20:03:41.053776 systemd[1]: sshd@2-10.0.0.154:22-10.0.0.1:60596.service: Deactivated successfully. 
Oct 8 20:03:41.055881 systemd-logind[1544]: Session 3 logged out. Waiting for processes to exit. Oct 8 20:03:41.056455 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 20:03:41.057845 systemd-logind[1544]: Removed session 3. Oct 8 20:03:41.085934 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 60606 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:41.087266 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:41.091542 systemd-logind[1544]: New session 4 of user core. Oct 8 20:03:41.106398 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 20:03:41.158470 sshd[1710]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:41.170384 systemd[1]: Started sshd@4-10.0.0.154:22-10.0.0.1:60620.service - OpenSSH per-connection server daemon (10.0.0.1:60620). Oct 8 20:03:41.170794 systemd[1]: sshd@3-10.0.0.154:22-10.0.0.1:60606.service: Deactivated successfully. Oct 8 20:03:41.172604 systemd-logind[1544]: Session 4 logged out. Waiting for processes to exit. Oct 8 20:03:41.173345 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 20:03:41.174727 systemd-logind[1544]: Removed session 4. Oct 8 20:03:41.203604 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 60620 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:41.204902 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:41.209106 systemd-logind[1544]: New session 5 of user core. Oct 8 20:03:41.220388 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 8 20:03:41.289643 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 20:03:41.289951 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:03:41.313025 sudo[1725]: pam_unix(sudo:session): session closed for user root Oct 8 20:03:41.315071 sshd[1718]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:41.323390 systemd[1]: Started sshd@5-10.0.0.154:22-10.0.0.1:60626.service - OpenSSH per-connection server daemon (10.0.0.1:60626). Oct 8 20:03:41.323801 systemd[1]: sshd@4-10.0.0.154:22-10.0.0.1:60620.service: Deactivated successfully. Oct 8 20:03:41.325587 systemd-logind[1544]: Session 5 logged out. Waiting for processes to exit. Oct 8 20:03:41.326223 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 20:03:41.327841 systemd-logind[1544]: Removed session 5. Oct 8 20:03:41.358412 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 60626 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:41.359543 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:41.364497 systemd-logind[1544]: New session 6 of user core. Oct 8 20:03:41.374420 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 20:03:41.427324 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 20:03:41.427596 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:03:41.431584 sudo[1735]: pam_unix(sudo:session): session closed for user root Oct 8 20:03:41.436381 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 20:03:41.436648 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:03:41.456459 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Oct 8 20:03:41.457937 auditctl[1738]: No rules Oct 8 20:03:41.458332 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 20:03:41.458583 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 20:03:41.461341 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 20:03:41.486118 augenrules[1757]: No rules Oct 8 20:03:41.487490 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 20:03:41.488563 sudo[1734]: pam_unix(sudo:session): session closed for user root Oct 8 20:03:41.490340 sshd[1727]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:41.498358 systemd[1]: Started sshd@6-10.0.0.154:22-10.0.0.1:60634.service - OpenSSH per-connection server daemon (10.0.0.1:60634). Oct 8 20:03:41.498758 systemd[1]: sshd@5-10.0.0.154:22-10.0.0.1:60626.service: Deactivated successfully. Oct 8 20:03:41.501296 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 20:03:41.502628 systemd-logind[1544]: Session 6 logged out. Waiting for processes to exit. Oct 8 20:03:41.503735 systemd-logind[1544]: Removed session 6. Oct 8 20:03:41.534390 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 60634 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:03:41.535785 sshd[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:41.540183 systemd-logind[1544]: New session 7 of user core. Oct 8 20:03:41.556349 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 20:03:41.607407 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 20:03:41.607700 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:03:41.921367 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Oct 8 20:03:41.921600 (dockerd)[1790]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 20:03:42.178545 dockerd[1790]: time="2024-10-08T20:03:42.178399696Z" level=info msg="Starting up" Oct 8 20:03:42.422896 dockerd[1790]: time="2024-10-08T20:03:42.422844536Z" level=info msg="Loading containers: start." Oct 8 20:03:42.540136 kernel: Initializing XFRM netlink socket Oct 8 20:03:42.601264 systemd-networkd[1237]: docker0: Link UP Oct 8 20:03:42.618232 dockerd[1790]: time="2024-10-08T20:03:42.618188656Z" level=info msg="Loading containers: done." Oct 8 20:03:42.631268 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2715845276-merged.mount: Deactivated successfully. Oct 8 20:03:42.631490 dockerd[1790]: time="2024-10-08T20:03:42.631390216Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 20:03:42.631490 dockerd[1790]: time="2024-10-08T20:03:42.631485496Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 20:03:42.631631 dockerd[1790]: time="2024-10-08T20:03:42.631602416Z" level=info msg="Daemon has completed initialization" Oct 8 20:03:42.655667 dockerd[1790]: time="2024-10-08T20:03:42.655548536Z" level=info msg="API listen on /run/docker.sock" Oct 8 20:03:42.655967 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 20:03:43.263300 containerd[1559]: time="2024-10-08T20:03:43.263260496Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 8 20:03:43.930387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount49997360.mount: Deactivated successfully. 
Oct 8 20:03:45.278689 containerd[1559]: time="2024-10-08T20:03:45.278631976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:45.279180 containerd[1559]: time="2024-10-08T20:03:45.279145816Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=32286060" Oct 8 20:03:45.280034 containerd[1559]: time="2024-10-08T20:03:45.280007696Z" level=info msg="ImageCreate event name:\"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:45.284152 containerd[1559]: time="2024-10-08T20:03:45.283879616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:45.285141 containerd[1559]: time="2024-10-08T20:03:45.285087256Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"32282858\" in 2.02177852s" Oct 8 20:03:45.285141 containerd[1559]: time="2024-10-08T20:03:45.285138456Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\"" Oct 8 20:03:45.303132 containerd[1559]: time="2024-10-08T20:03:45.303096496Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 8 20:03:46.611129 containerd[1559]: time="2024-10-08T20:03:46.611072056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:46.612339 containerd[1559]: time="2024-10-08T20:03:46.612309336Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=29374206" Oct 8 20:03:46.612991 containerd[1559]: time="2024-10-08T20:03:46.612687376Z" level=info msg="ImageCreate event name:\"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:46.616185 containerd[1559]: time="2024-10-08T20:03:46.616126736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:46.617359 containerd[1559]: time="2024-10-08T20:03:46.617321616Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"30862018\" in 1.3140924s" Oct 8 20:03:46.617426 containerd[1559]: time="2024-10-08T20:03:46.617361696Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\"" Oct 8 20:03:46.636578 containerd[1559]: time="2024-10-08T20:03:46.636500576Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 8 20:03:47.378365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 20:03:47.387384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:47.489303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:03:47.493734 (kubelet)[2032]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:03:47.542867 kubelet[2032]: E1008 20:03:47.542810 2032 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:03:47.545957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:03:47.546147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:03:47.735254 containerd[1559]: time="2024-10-08T20:03:47.734312496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:47.735254 containerd[1559]: time="2024-10-08T20:03:47.735170096Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=15751219" Oct 8 20:03:47.735742 containerd[1559]: time="2024-10-08T20:03:47.735699456Z" level=info msg="ImageCreate event name:\"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:47.738919 containerd[1559]: time="2024-10-08T20:03:47.738884256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:47.740157 containerd[1559]: time="2024-10-08T20:03:47.740123536Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"17239049\" in 1.10358036s" Oct 8 20:03:47.740198 containerd[1559]: time="2024-10-08T20:03:47.740159216Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\"" Oct 8 20:03:47.759260 containerd[1559]: time="2024-10-08T20:03:47.759227736Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 8 20:03:48.700983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3421878346.mount: Deactivated successfully. Oct 8 20:03:49.047927 containerd[1559]: time="2024-10-08T20:03:49.047763536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:49.048698 containerd[1559]: time="2024-10-08T20:03:49.048313696Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=25254040" Oct 8 20:03:49.050255 containerd[1559]: time="2024-10-08T20:03:49.050155856Z" level=info msg="ImageCreate event name:\"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:49.052202 containerd[1559]: time="2024-10-08T20:03:49.052174776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:49.052931 containerd[1559]: time="2024-10-08T20:03:49.052791896Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", 
size \"25253057\" in 1.29352144s" Oct 8 20:03:49.052931 containerd[1559]: time="2024-10-08T20:03:49.052827536Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\"" Oct 8 20:03:49.071137 containerd[1559]: time="2024-10-08T20:03:49.071057096Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 20:03:49.672075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3578285619.mount: Deactivated successfully. Oct 8 20:03:50.186609 containerd[1559]: time="2024-10-08T20:03:50.186558536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:50.187541 containerd[1559]: time="2024-10-08T20:03:50.187308336Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Oct 8 20:03:50.188213 containerd[1559]: time="2024-10-08T20:03:50.188179096Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:50.191219 containerd[1559]: time="2024-10-08T20:03:50.191186016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:50.192448 containerd[1559]: time="2024-10-08T20:03:50.192419976Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.12132428s" Oct 8 20:03:50.192494 containerd[1559]: 
time="2024-10-08T20:03:50.192449416Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Oct 8 20:03:50.210137 containerd[1559]: time="2024-10-08T20:03:50.210095936Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 8 20:03:50.622940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3529996752.mount: Deactivated successfully. Oct 8 20:03:50.629697 containerd[1559]: time="2024-10-08T20:03:50.629657416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:50.630455 containerd[1559]: time="2024-10-08T20:03:50.630273256Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Oct 8 20:03:50.631282 containerd[1559]: time="2024-10-08T20:03:50.631232336Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:50.634838 containerd[1559]: time="2024-10-08T20:03:50.633286856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:50.634838 containerd[1559]: time="2024-10-08T20:03:50.634431256Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 424.28812ms" Oct 8 20:03:50.634838 containerd[1559]: time="2024-10-08T20:03:50.634458056Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference 
\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Oct 8 20:03:50.658344 containerd[1559]: time="2024-10-08T20:03:50.658320416Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 8 20:03:51.186237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261266944.mount: Deactivated successfully. Oct 8 20:03:52.682642 containerd[1559]: time="2024-10-08T20:03:52.682587296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:52.683062 containerd[1559]: time="2024-10-08T20:03:52.683012016Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Oct 8 20:03:52.684006 containerd[1559]: time="2024-10-08T20:03:52.683973216Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:52.687184 containerd[1559]: time="2024-10-08T20:03:52.687150696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:52.688578 containerd[1559]: time="2024-10-08T20:03:52.688539096Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.03018536s" Oct 8 20:03:52.688619 containerd[1559]: time="2024-10-08T20:03:52.688580336Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Oct 8 20:03:57.469748 systemd[1]: Stopped kubelet.service - kubelet: The 
Kubernetes Node Agent. Oct 8 20:03:57.484315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:57.497991 systemd[1]: Reloading requested from client PID 2249 ('systemctl') (unit session-7.scope)... Oct 8 20:03:57.498008 systemd[1]: Reloading... Oct 8 20:03:57.551150 zram_generator::config[2291]: No configuration found. Oct 8 20:03:57.645060 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:03:57.693249 systemd[1]: Reloading finished in 194 ms. Oct 8 20:03:57.728547 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:57.731354 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 20:03:57.731586 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:03:57.733052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:57.855171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:03:57.859310 (kubelet)[2348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:03:57.902640 kubelet[2348]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:03:57.902640 kubelet[2348]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:03:57.902640 kubelet[2348]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:03:57.902973 kubelet[2348]: I1008 20:03:57.902701 2348 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:03:58.524661 kubelet[2348]: I1008 20:03:58.524613 2348 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 20:03:58.524661 kubelet[2348]: I1008 20:03:58.524641 2348 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:03:58.524887 kubelet[2348]: I1008 20:03:58.524860 2348 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 20:03:58.546788 kubelet[2348]: I1008 20:03:58.546663 2348 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:03:58.546788 kubelet[2348]: E1008 20:03:58.546717 2348 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:03:58.555999 kubelet[2348]: I1008 20:03:58.555980 2348 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 20:03:58.557086 kubelet[2348]: I1008 20:03:58.557056 2348 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:03:58.557294 kubelet[2348]: I1008 20:03:58.557269 2348 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 20:03:58.557294 kubelet[2348]: I1008 20:03:58.557296 2348 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:03:58.557402 kubelet[2348]: I1008 20:03:58.557305 2348 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 20:03:58.557430 kubelet[2348]: I1008 
20:03:58.557406 2348 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:03:58.561269 kubelet[2348]: I1008 20:03:58.561244 2348 kubelet.go:396] "Attempting to sync node with API server" Oct 8 20:03:58.561296 kubelet[2348]: I1008 20:03:58.561272 2348 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:03:58.561296 kubelet[2348]: I1008 20:03:58.561295 2348 kubelet.go:312] "Adding apiserver pod source" Oct 8 20:03:58.561340 kubelet[2348]: I1008 20:03:58.561308 2348 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:03:58.562157 kubelet[2348]: W1008 20:03:58.561699 2348 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:03:58.562157 kubelet[2348]: E1008 20:03:58.561753 2348 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:03:58.562259 kubelet[2348]: W1008 20:03:58.562176 2348 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:03:58.562259 kubelet[2348]: E1008 20:03:58.562216 2348 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:03:58.562777 kubelet[2348]: I1008 20:03:58.562744 2348 kuberuntime_manager.go:258] "Container runtime initialized" 
containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:03:58.563231 kubelet[2348]: I1008 20:03:58.563203 2348 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:03:58.563361 kubelet[2348]: W1008 20:03:58.563340 2348 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 20:03:58.564202 kubelet[2348]: I1008 20:03:58.564174 2348 server.go:1256] "Started kubelet" Oct 8 20:03:58.564521 kubelet[2348]: I1008 20:03:58.564488 2348 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:03:58.564732 kubelet[2348]: I1008 20:03:58.564705 2348 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:03:58.564970 kubelet[2348]: I1008 20:03:58.564951 2348 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:03:58.565963 kubelet[2348]: I1008 20:03:58.565305 2348 server.go:461] "Adding debug handlers to kubelet server" Oct 8 20:03:58.568670 kubelet[2348]: I1008 20:03:58.568640 2348 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:03:58.570017 kubelet[2348]: E1008 20:03:58.569981 2348 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.154:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.154:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc92dddb108a10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 20:03:58.564149776 +0000 UTC m=+0.701222481,LastTimestamp:2024-10-08 20:03:58.564149776 +0000 UTC m=+0.701222481,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 20:03:58.571045 kubelet[2348]: I1008 20:03:58.571023 2348 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 20:03:58.571137 kubelet[2348]: I1008 20:03:58.571131 2348 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 8 20:03:58.571197 kubelet[2348]: I1008 20:03:58.571182 2348 reconciler_new.go:29] "Reconciler: start to sync state" Oct 8 20:03:58.571455 kubelet[2348]: W1008 20:03:58.571409 2348 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:03:58.571455 kubelet[2348]: E1008 20:03:58.571453 2348 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:03:58.572029 kubelet[2348]: E1008 20:03:58.571665 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.154:6443: connect: connection refused" interval="200ms" Oct 8 20:03:58.572961 kubelet[2348]: I1008 20:03:58.572897 2348 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:03:58.573051 kubelet[2348]: I1008 20:03:58.572970 2348 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:03:58.574229 kubelet[2348]: E1008 20:03:58.574107 2348 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:03:58.574396 kubelet[2348]: I1008 20:03:58.574362 2348 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:03:58.584610 kubelet[2348]: I1008 20:03:58.584552 2348 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:03:58.586147 kubelet[2348]: I1008 20:03:58.585591 2348 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 20:03:58.586147 kubelet[2348]: I1008 20:03:58.585613 2348 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:03:58.586147 kubelet[2348]: I1008 20:03:58.585628 2348 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 20:03:58.586147 kubelet[2348]: E1008 20:03:58.585684 2348 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:03:58.586267 kubelet[2348]: W1008 20:03:58.586199 2348 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:03:58.586267 kubelet[2348]: E1008 20:03:58.586240 2348 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:03:58.593542 kubelet[2348]: I1008 20:03:58.593523 2348 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:03:58.593542 kubelet[2348]: I1008 20:03:58.593540 2348 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:03:58.593631 kubelet[2348]: I1008 20:03:58.593556 2348 state_mem.go:36] "Initialized new in-memory 
state store" Oct 8 20:03:58.673021 kubelet[2348]: I1008 20:03:58.672996 2348 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 20:03:58.673442 kubelet[2348]: E1008 20:03:58.673422 2348 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.154:6443/api/v1/nodes\": dial tcp 10.0.0.154:6443: connect: connection refused" node="localhost" Oct 8 20:03:58.686573 kubelet[2348]: E1008 20:03:58.686541 2348 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 20:03:58.772216 kubelet[2348]: E1008 20:03:58.772183 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.154:6443: connect: connection refused" interval="400ms" Oct 8 20:03:58.786066 kubelet[2348]: I1008 20:03:58.785974 2348 policy_none.go:49] "None policy: Start" Oct 8 20:03:58.787476 kubelet[2348]: I1008 20:03:58.787442 2348 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:03:58.787551 kubelet[2348]: I1008 20:03:58.787486 2348 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:03:58.791919 kubelet[2348]: I1008 20:03:58.791897 2348 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:03:58.793448 kubelet[2348]: I1008 20:03:58.793427 2348 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:03:58.793533 kubelet[2348]: E1008 20:03:58.793518 2348 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 8 20:03:58.874464 kubelet[2348]: I1008 20:03:58.874436 2348 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 20:03:58.874730 kubelet[2348]: E1008 20:03:58.874701 2348 kubelet_node_status.go:96] 
"Unable to register node with API server" err="Post \"https://10.0.0.154:6443/api/v1/nodes\": dial tcp 10.0.0.154:6443: connect: connection refused" node="localhost" Oct 8 20:03:58.887027 kubelet[2348]: I1008 20:03:58.886986 2348 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 20:03:58.887769 kubelet[2348]: I1008 20:03:58.887739 2348 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 20:03:58.888524 kubelet[2348]: I1008 20:03:58.888487 2348 topology_manager.go:215] "Topology Admit Handler" podUID="ceeaf065544974e867179794469cac03" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 20:03:58.973069 kubelet[2348]: I1008 20:03:58.973017 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:03:58.973395 kubelet[2348]: I1008 20:03:58.973087 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:03:58.973395 kubelet[2348]: I1008 20:03:58.973144 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 8 20:03:58.973395 kubelet[2348]: I1008 20:03:58.973168 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ceeaf065544974e867179794469cac03-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ceeaf065544974e867179794469cac03\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:03:58.973395 kubelet[2348]: I1008 20:03:58.973189 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:03:58.973395 kubelet[2348]: I1008 20:03:58.973209 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:03:58.973517 kubelet[2348]: I1008 20:03:58.973254 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ceeaf065544974e867179794469cac03-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ceeaf065544974e867179794469cac03\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:03:58.973517 kubelet[2348]: I1008 20:03:58.973313 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ceeaf065544974e867179794469cac03-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"ceeaf065544974e867179794469cac03\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:03:58.973517 kubelet[2348]: I1008 20:03:58.973333 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:03:59.173084 kubelet[2348]: E1008 20:03:59.172957 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.154:6443: connect: connection refused" interval="800ms" Oct 8 20:03:59.190562 kubelet[2348]: E1008 20:03:59.190520 2348 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.154:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.154:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc92dddb108a10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 20:03:58.564149776 +0000 UTC m=+0.701222481,LastTimestamp:2024-10-08 20:03:58.564149776 +0000 UTC m=+0.701222481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 20:03:59.192702 kubelet[2348]: E1008 20:03:59.192663 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:59.192772 kubelet[2348]: E1008 20:03:59.192665 2348 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:59.193396 containerd[1559]: time="2024-10-08T20:03:59.193347336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}" Oct 8 20:03:59.193677 containerd[1559]: time="2024-10-08T20:03:59.193364576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}" Oct 8 20:03:59.195529 kubelet[2348]: E1008 20:03:59.195498 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:59.196004 containerd[1559]: time="2024-10-08T20:03:59.195805016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ceeaf065544974e867179794469cac03,Namespace:kube-system,Attempt:0,}" Oct 8 20:03:59.277637 kubelet[2348]: I1008 20:03:59.277606 2348 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 20:03:59.277968 kubelet[2348]: E1008 20:03:59.277950 2348 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.154:6443/api/v1/nodes\": dial tcp 10.0.0.154:6443: connect: connection refused" node="localhost" Oct 8 20:03:59.523434 kubelet[2348]: W1008 20:03:59.523397 2348 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:03:59.523434 kubelet[2348]: E1008 20:03:59.523438 2348 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get "https://10.0.0.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:03:59.634791 kubelet[2348]: W1008 20:03:59.634718 2348 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:03:59.634791 kubelet[2348]: E1008 20:03:59.634771 2348 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:03:59.644036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1317475063.mount: Deactivated successfully. Oct 8 20:03:59.647142 containerd[1559]: time="2024-10-08T20:03:59.647076656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:03:59.648266 containerd[1559]: time="2024-10-08T20:03:59.648233016Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:03:59.648825 containerd[1559]: time="2024-10-08T20:03:59.648787016Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:03:59.649581 containerd[1559]: time="2024-10-08T20:03:59.649550896Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:03:59.650303 containerd[1559]: time="2024-10-08T20:03:59.650270096Z" level=info msg="ImageCreate event 
name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:03:59.650424 containerd[1559]: time="2024-10-08T20:03:59.650386176Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:03:59.651054 containerd[1559]: time="2024-10-08T20:03:59.651021136Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Oct 8 20:03:59.653144 containerd[1559]: time="2024-10-08T20:03:59.653081656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:03:59.655260 containerd[1559]: time="2024-10-08T20:03:59.655229496Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 461.795ms" Oct 8 20:03:59.656064 containerd[1559]: time="2024-10-08T20:03:59.655898296Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 462.43148ms" Oct 8 20:03:59.657872 containerd[1559]: time="2024-10-08T20:03:59.657831216Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 461.97144ms" Oct 8 20:03:59.751142 kubelet[2348]: W1008 20:03:59.751028 2348 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:03:59.751142 kubelet[2348]: E1008 20:03:59.751098 2348 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:03:59.795951 containerd[1559]: time="2024-10-08T20:03:59.795786536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:59.795951 containerd[1559]: time="2024-10-08T20:03:59.795839496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:59.795951 containerd[1559]: time="2024-10-08T20:03:59.795867656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:59.796091 containerd[1559]: time="2024-10-08T20:03:59.795953656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:59.797032 containerd[1559]: time="2024-10-08T20:03:59.796764016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:59.797032 containerd[1559]: time="2024-10-08T20:03:59.796820936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:59.797032 containerd[1559]: time="2024-10-08T20:03:59.796836456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:59.797032 containerd[1559]: time="2024-10-08T20:03:59.796925816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:59.798292 containerd[1559]: time="2024-10-08T20:03:59.798050096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:59.798292 containerd[1559]: time="2024-10-08T20:03:59.798096256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:59.798292 containerd[1559]: time="2024-10-08T20:03:59.798110536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:59.798292 containerd[1559]: time="2024-10-08T20:03:59.798193136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:59.846222 containerd[1559]: time="2024-10-08T20:03:59.846181456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ceeaf065544974e867179794469cac03,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f1b132f0501afdb8c4577cff877038b4e569bbea1784d53a9088c0e68a7050b\"" Oct 8 20:03:59.847876 kubelet[2348]: E1008 20:03:59.847851 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:59.851244 containerd[1559]: time="2024-10-08T20:03:59.851213216Z" level=info msg="CreateContainer within sandbox \"4f1b132f0501afdb8c4577cff877038b4e569bbea1784d53a9088c0e68a7050b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 20:03:59.854546 containerd[1559]: time="2024-10-08T20:03:59.854438856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a8c814e5ed2d6891235e2388267c651461e1b1a710fa76589d964d753ccdb1b\"" Oct 8 20:03:59.854895 containerd[1559]: time="2024-10-08T20:03:59.854866296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7e33d869c7121899f1c392ebf24598fac4a4a18b8ba6080d21b87e975be0652\"" Oct 8 20:03:59.855191 kubelet[2348]: E1008 20:03:59.855155 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:03:59.855364 kubelet[2348]: E1008 20:03:59.855337 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 8 20:03:59.856471 containerd[1559]: time="2024-10-08T20:03:59.856445816Z" level=info msg="CreateContainer within sandbox \"6a8c814e5ed2d6891235e2388267c651461e1b1a710fa76589d964d753ccdb1b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 20:03:59.857667 containerd[1559]: time="2024-10-08T20:03:59.857552336Z" level=info msg="CreateContainer within sandbox \"c7e33d869c7121899f1c392ebf24598fac4a4a18b8ba6080d21b87e975be0652\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 20:03:59.868741 containerd[1559]: time="2024-10-08T20:03:59.868686616Z" level=info msg="CreateContainer within sandbox \"4f1b132f0501afdb8c4577cff877038b4e569bbea1784d53a9088c0e68a7050b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"de43d085b6810d9ae2bc3e03d91ac9de0ecb7bb9738d402d39e343246b3c058c\"" Oct 8 20:03:59.870450 containerd[1559]: time="2024-10-08T20:03:59.869452176Z" level=info msg="StartContainer for \"de43d085b6810d9ae2bc3e03d91ac9de0ecb7bb9738d402d39e343246b3c058c\"" Oct 8 20:03:59.872218 containerd[1559]: time="2024-10-08T20:03:59.871803216Z" level=info msg="CreateContainer within sandbox \"6a8c814e5ed2d6891235e2388267c651461e1b1a710fa76589d964d753ccdb1b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fef658db81f42d73c12bc2617145033b412e80a6a2601dc5b4552a3b141e8675\"" Oct 8 20:03:59.872307 containerd[1559]: time="2024-10-08T20:03:59.872272536Z" level=info msg="StartContainer for \"fef658db81f42d73c12bc2617145033b412e80a6a2601dc5b4552a3b141e8675\"" Oct 8 20:03:59.874659 containerd[1559]: time="2024-10-08T20:03:59.874484336Z" level=info msg="CreateContainer within sandbox \"c7e33d869c7121899f1c392ebf24598fac4a4a18b8ba6080d21b87e975be0652\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a40c50454a90ba070e7230f357b2b2697d9272858fca4031a07279fb60492857\"" Oct 8 20:03:59.875963 containerd[1559]: 
time="2024-10-08T20:03:59.874993896Z" level=info msg="StartContainer for \"a40c50454a90ba070e7230f357b2b2697d9272858fca4031a07279fb60492857\"" Oct 8 20:03:59.935625 containerd[1559]: time="2024-10-08T20:03:59.935586776Z" level=info msg="StartContainer for \"a40c50454a90ba070e7230f357b2b2697d9272858fca4031a07279fb60492857\" returns successfully" Oct 8 20:03:59.935817 containerd[1559]: time="2024-10-08T20:03:59.935799096Z" level=info msg="StartContainer for \"fef658db81f42d73c12bc2617145033b412e80a6a2601dc5b4552a3b141e8675\" returns successfully" Oct 8 20:03:59.940984 containerd[1559]: time="2024-10-08T20:03:59.940954736Z" level=info msg="StartContainer for \"de43d085b6810d9ae2bc3e03d91ac9de0ecb7bb9738d402d39e343246b3c058c\" returns successfully" Oct 8 20:03:59.974152 kubelet[2348]: E1008 20:03:59.974105 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.154:6443: connect: connection refused" interval="1.6s" Oct 8 20:04:00.062376 kubelet[2348]: W1008 20:04:00.062260 2348 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:04:00.062518 kubelet[2348]: E1008 20:04:00.062495 2348 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused Oct 8 20:04:00.080898 kubelet[2348]: I1008 20:04:00.080880 2348 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 20:04:00.081861 kubelet[2348]: E1008 20:04:00.081766 2348 kubelet_node_status.go:96] "Unable to register node with API server" 
err="Post \"https://10.0.0.154:6443/api/v1/nodes\": dial tcp 10.0.0.154:6443: connect: connection refused" node="localhost" Oct 8 20:04:00.595616 kubelet[2348]: E1008 20:04:00.595587 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:00.600896 kubelet[2348]: E1008 20:04:00.600874 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:00.601756 kubelet[2348]: E1008 20:04:00.601734 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:01.580894 kubelet[2348]: E1008 20:04:01.580853 2348 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 8 20:04:01.604541 kubelet[2348]: E1008 20:04:01.604511 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:01.604629 kubelet[2348]: E1008 20:04:01.604590 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:01.683622 kubelet[2348]: I1008 20:04:01.683583 2348 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 20:04:01.694424 kubelet[2348]: I1008 20:04:01.694305 2348 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 20:04:01.700876 kubelet[2348]: E1008 20:04:01.700852 2348 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:04:01.801579 kubelet[2348]: E1008 
20:04:01.801544 2348 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:04:01.902742 kubelet[2348]: E1008 20:04:01.902631 2348 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:04:02.003125 kubelet[2348]: E1008 20:04:02.003084 2348 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 20:04:02.382098 kubelet[2348]: E1008 20:04:02.382058 2348 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 8 20:04:02.382549 kubelet[2348]: E1008 20:04:02.382524 2348 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:02.563887 kubelet[2348]: I1008 20:04:02.563836 2348 apiserver.go:52] "Watching apiserver" Oct 8 20:04:02.571619 kubelet[2348]: I1008 20:04:02.571575 2348 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 20:04:03.984488 systemd[1]: Reloading requested from client PID 2625 ('systemctl') (unit session-7.scope)... Oct 8 20:04:03.984503 systemd[1]: Reloading... Oct 8 20:04:04.048186 zram_generator::config[2667]: No configuration found. Oct 8 20:04:04.133128 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:04:04.205221 systemd[1]: Reloading finished in 220 ms. Oct 8 20:04:04.235673 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:04:04.246933 systemd[1]: kubelet.service: Deactivated successfully. 
Oct 8 20:04:04.247270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:04:04.258456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:04:04.343204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:04:04.346870 (kubelet)[2716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:04:04.392239 kubelet[2716]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:04:04.393136 kubelet[2716]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:04:04.393136 kubelet[2716]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:04:04.393136 kubelet[2716]: I1008 20:04:04.392632 2716 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:04:04.396618 kubelet[2716]: I1008 20:04:04.396588 2716 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 20:04:04.396618 kubelet[2716]: I1008 20:04:04.396615 2716 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:04:04.396930 kubelet[2716]: I1008 20:04:04.396802 2716 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 20:04:04.399665 kubelet[2716]: I1008 20:04:04.399632 2716 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 8 20:04:04.402288 kubelet[2716]: I1008 20:04:04.402261 2716 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:04:04.409289 kubelet[2716]: I1008 20:04:04.409249 2716 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 20:04:04.409685 kubelet[2716]: I1008 20:04:04.409661 2716 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:04:04.409846 kubelet[2716]: I1008 20:04:04.409824 2716 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":nul
l} Oct 8 20:04:04.409917 kubelet[2716]: I1008 20:04:04.409849 2716 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:04:04.409917 kubelet[2716]: I1008 20:04:04.409875 2716 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 20:04:04.409917 kubelet[2716]: I1008 20:04:04.409903 2716 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:04:04.410010 kubelet[2716]: I1008 20:04:04.409998 2716 kubelet.go:396] "Attempting to sync node with API server" Oct 8 20:04:04.410033 kubelet[2716]: I1008 20:04:04.410017 2716 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:04:04.410052 kubelet[2716]: I1008 20:04:04.410037 2716 kubelet.go:312] "Adding apiserver pod source" Oct 8 20:04:04.410559 kubelet[2716]: I1008 20:04:04.410478 2716 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:04:04.411146 kubelet[2716]: I1008 20:04:04.410907 2716 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:04:04.414118 kubelet[2716]: I1008 20:04:04.411532 2716 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:04:04.414118 kubelet[2716]: I1008 20:04:04.411911 2716 server.go:1256] "Started kubelet" Oct 8 20:04:04.414118 kubelet[2716]: I1008 20:04:04.412192 2716 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:04:04.414118 kubelet[2716]: I1008 20:04:04.412893 2716 server.go:461] "Adding debug handlers to kubelet server" Oct 8 20:04:04.414118 kubelet[2716]: I1008 20:04:04.412989 2716 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:04:04.414118 kubelet[2716]: I1008 20:04:04.413750 2716 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:04:04.414118 kubelet[2716]: I1008 20:04:04.413891 2716 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:04:04.416780 kubelet[2716]: I1008 20:04:04.414479 2716 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 20:04:04.416780 kubelet[2716]: I1008 20:04:04.414576 2716 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 8 20:04:04.416780 kubelet[2716]: I1008 20:04:04.414711 2716 reconciler_new.go:29] "Reconciler: start to sync state" Oct 8 20:04:04.416895 kubelet[2716]: I1008 20:04:04.416867 2716 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:04:04.417581 kubelet[2716]: I1008 20:04:04.416936 2716 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:04:04.421125 kubelet[2716]: E1008 20:04:04.420782 2716 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:04:04.423183 sudo[2737]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 8 20:04:04.423460 sudo[2737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 8 20:04:04.423539 kubelet[2716]: I1008 20:04:04.423510 2716 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:04:04.423993 kubelet[2716]: I1008 20:04:04.423858 2716 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:04:04.439060 kubelet[2716]: I1008 20:04:04.438820 2716 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 20:04:04.439060 kubelet[2716]: I1008 20:04:04.438847 2716 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:04:04.439060 kubelet[2716]: I1008 20:04:04.438870 2716 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 20:04:04.439060 kubelet[2716]: E1008 20:04:04.438920 2716 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:04:04.481572 kubelet[2716]: I1008 20:04:04.481545 2716 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:04:04.481572 kubelet[2716]: I1008 20:04:04.481569 2716 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:04:04.481722 kubelet[2716]: I1008 20:04:04.481587 2716 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:04:04.481743 kubelet[2716]: I1008 20:04:04.481731 2716 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 20:04:04.481762 kubelet[2716]: I1008 20:04:04.481749 2716 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 20:04:04.481762 kubelet[2716]: I1008 20:04:04.481755 2716 policy_none.go:49] "None policy: Start" Oct 8 20:04:04.482714 kubelet[2716]: I1008 20:04:04.482700 2716 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:04:04.482766 kubelet[2716]: I1008 20:04:04.482723 2716 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:04:04.482865 kubelet[2716]: I1008 20:04:04.482855 2716 state_mem.go:75] "Updated machine memory state" Oct 8 20:04:04.483964 kubelet[2716]: I1008 20:04:04.483942 2716 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:04:04.484183 kubelet[2716]: I1008 20:04:04.484161 2716 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:04:04.518462 kubelet[2716]: I1008 20:04:04.518361 2716 kubelet_node_status.go:73] "Attempting to register node" 
node="localhost" Oct 8 20:04:04.525592 kubelet[2716]: I1008 20:04:04.525554 2716 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 8 20:04:04.525709 kubelet[2716]: I1008 20:04:04.525630 2716 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 20:04:04.539580 kubelet[2716]: I1008 20:04:04.539559 2716 topology_manager.go:215] "Topology Admit Handler" podUID="ceeaf065544974e867179794469cac03" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 20:04:04.539771 kubelet[2716]: I1008 20:04:04.539757 2716 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 20:04:04.541003 kubelet[2716]: I1008 20:04:04.540314 2716 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 20:04:04.715698 kubelet[2716]: I1008 20:04:04.715663 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 8 20:04:04.715882 kubelet[2716]: I1008 20:04:04.715869 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ceeaf065544974e867179794469cac03-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ceeaf065544974e867179794469cac03\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:04:04.715958 kubelet[2716]: I1008 20:04:04.715950 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:04:04.716023 kubelet[2716]: I1008 20:04:04.716015 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:04:04.716217 kubelet[2716]: I1008 20:04:04.716079 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:04:04.716217 kubelet[2716]: I1008 20:04:04.716101 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:04:04.716217 kubelet[2716]: I1008 20:04:04.716136 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ceeaf065544974e867179794469cac03-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ceeaf065544974e867179794469cac03\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:04:04.716217 kubelet[2716]: I1008 20:04:04.716159 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/ceeaf065544974e867179794469cac03-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ceeaf065544974e867179794469cac03\") " pod="kube-system/kube-apiserver-localhost" Oct 8 20:04:04.716217 kubelet[2716]: I1008 20:04:04.716183 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 20:04:04.846264 kubelet[2716]: E1008 20:04:04.846023 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:04.846349 kubelet[2716]: E1008 20:04:04.846279 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:04.847013 kubelet[2716]: E1008 20:04:04.846966 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:04.866713 sudo[2737]: pam_unix(sudo:session): session closed for user root Oct 8 20:04:05.411216 kubelet[2716]: I1008 20:04:05.411174 2716 apiserver.go:52] "Watching apiserver" Oct 8 20:04:05.455051 kubelet[2716]: E1008 20:04:05.455017 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:05.456885 kubelet[2716]: E1008 20:04:05.456864 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:05.464321 
kubelet[2716]: E1008 20:04:05.464290 2716 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 8 20:04:05.465695 kubelet[2716]: E1008 20:04:05.465679 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:05.484758 kubelet[2716]: I1008 20:04:05.482367 2716 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.482328536 podStartE2EDuration="1.482328536s" podCreationTimestamp="2024-10-08 20:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:04:05.480131296 +0000 UTC m=+1.129973881" watchObservedRunningTime="2024-10-08 20:04:05.482328536 +0000 UTC m=+1.132171121" Oct 8 20:04:05.494493 kubelet[2716]: I1008 20:04:05.494444 2716 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.494406096 podStartE2EDuration="1.494406096s" podCreationTimestamp="2024-10-08 20:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:04:05.487292216 +0000 UTC m=+1.137134881" watchObservedRunningTime="2024-10-08 20:04:05.494406096 +0000 UTC m=+1.144248641" Oct 8 20:04:05.515097 kubelet[2716]: I1008 20:04:05.515069 2716 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 20:04:06.455957 kubelet[2716]: E1008 20:04:06.455929 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:06.458320 kubelet[2716]: E1008 
20:04:06.458275 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:07.224019 sudo[1770]: pam_unix(sudo:session): session closed for user root Oct 8 20:04:07.226503 sshd[1763]: pam_unix(sshd:session): session closed for user core Oct 8 20:04:07.229290 systemd[1]: sshd@6-10.0.0.154:22-10.0.0.1:60634.service: Deactivated successfully. Oct 8 20:04:07.231835 systemd-logind[1544]: Session 7 logged out. Waiting for processes to exit. Oct 8 20:04:07.232376 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 20:04:07.233198 systemd-logind[1544]: Removed session 7. Oct 8 20:04:08.841806 kubelet[2716]: E1008 20:04:08.841768 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:10.317529 kubelet[2716]: E1008 20:04:10.317445 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:10.333812 kubelet[2716]: I1008 20:04:10.333764 2716 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.333729408 podStartE2EDuration="6.333729408s" podCreationTimestamp="2024-10-08 20:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:04:05.494561776 +0000 UTC m=+1.144404361" watchObservedRunningTime="2024-10-08 20:04:10.333729408 +0000 UTC m=+5.983571993" Oct 8 20:04:10.463034 kubelet[2716]: E1008 20:04:10.462838 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 
20:04:15.060886 kubelet[2716]: E1008 20:04:15.060595 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:18.748075 kubelet[2716]: I1008 20:04:18.748033 2716 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 20:04:18.748692 containerd[1559]: time="2024-10-08T20:04:18.748602181Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 8 20:04:18.748962 kubelet[2716]: I1008 20:04:18.748796 2716 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 20:04:18.849002 kubelet[2716]: E1008 20:04:18.848976 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:19.427484 kubelet[2716]: I1008 20:04:19.427412 2716 topology_manager.go:215] "Topology Admit Handler" podUID="51f02bf4-e9b5-4f8b-92e3-0d3f47f463eb" podNamespace="kube-system" podName="kube-proxy-htpjg" Oct 8 20:04:19.436169 kubelet[2716]: I1008 20:04:19.434817 2716 topology_manager.go:215] "Topology Admit Handler" podUID="d783fb45-9f0c-4534-801b-f71cbf6beb35" podNamespace="kube-system" podName="cilium-65mkw" Oct 8 20:04:19.473957 kubelet[2716]: E1008 20:04:19.473922 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:19.517942 kubelet[2716]: I1008 20:04:19.517908 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-host-proc-sys-net\") pod \"cilium-65mkw\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " 
pod="kube-system/cilium-65mkw" Oct 8 20:04:19.518035 kubelet[2716]: I1008 20:04:19.517952 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-hostproc\") pod \"cilium-65mkw\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " pod="kube-system/cilium-65mkw" Oct 8 20:04:19.518035 kubelet[2716]: I1008 20:04:19.517979 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d783fb45-9f0c-4534-801b-f71cbf6beb35-cilium-config-path\") pod \"cilium-65mkw\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " pod="kube-system/cilium-65mkw" Oct 8 20:04:19.518035 kubelet[2716]: I1008 20:04:19.518000 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d783fb45-9f0c-4534-801b-f71cbf6beb35-hubble-tls\") pod \"cilium-65mkw\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " pod="kube-system/cilium-65mkw" Oct 8 20:04:19.518035 kubelet[2716]: I1008 20:04:19.518020 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51f02bf4-e9b5-4f8b-92e3-0d3f47f463eb-kube-proxy\") pod \"kube-proxy-htpjg\" (UID: \"51f02bf4-e9b5-4f8b-92e3-0d3f47f463eb\") " pod="kube-system/kube-proxy-htpjg" Oct 8 20:04:19.518035 kubelet[2716]: I1008 20:04:19.518037 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-lib-modules\") pod \"cilium-65mkw\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " pod="kube-system/cilium-65mkw" Oct 8 20:04:19.518169 kubelet[2716]: I1008 20:04:19.518056 2716 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-cilium-cgroup\") pod \"cilium-65mkw\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " pod="kube-system/cilium-65mkw" Oct 8 20:04:19.518169 kubelet[2716]: I1008 20:04:19.518075 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-cilium-run\") pod \"cilium-65mkw\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " pod="kube-system/cilium-65mkw" Oct 8 20:04:19.518169 kubelet[2716]: I1008 20:04:19.518096 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s8b4\" (UniqueName: \"kubernetes.io/projected/51f02bf4-e9b5-4f8b-92e3-0d3f47f463eb-kube-api-access-4s8b4\") pod \"kube-proxy-htpjg\" (UID: \"51f02bf4-e9b5-4f8b-92e3-0d3f47f463eb\") " pod="kube-system/kube-proxy-htpjg" Oct 8 20:04:19.518169 kubelet[2716]: I1008 20:04:19.518128 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-bpf-maps\") pod \"cilium-65mkw\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " pod="kube-system/cilium-65mkw" Oct 8 20:04:19.518169 kubelet[2716]: I1008 20:04:19.518150 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lnjf\" (UniqueName: \"kubernetes.io/projected/d783fb45-9f0c-4534-801b-f71cbf6beb35-kube-api-access-8lnjf\") pod \"cilium-65mkw\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " pod="kube-system/cilium-65mkw" Oct 8 20:04:19.518270 kubelet[2716]: I1008 20:04:19.518169 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/51f02bf4-e9b5-4f8b-92e3-0d3f47f463eb-lib-modules\") pod \"kube-proxy-htpjg\" (UID: \"51f02bf4-e9b5-4f8b-92e3-0d3f47f463eb\") " pod="kube-system/kube-proxy-htpjg" Oct 8 20:04:19.518270 kubelet[2716]: I1008 20:04:19.518188 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51f02bf4-e9b5-4f8b-92e3-0d3f47f463eb-xtables-lock\") pod \"kube-proxy-htpjg\" (UID: \"51f02bf4-e9b5-4f8b-92e3-0d3f47f463eb\") " pod="kube-system/kube-proxy-htpjg" Oct 8 20:04:19.518270 kubelet[2716]: I1008 20:04:19.518207 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-host-proc-sys-kernel\") pod \"cilium-65mkw\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " pod="kube-system/cilium-65mkw" Oct 8 20:04:19.518270 kubelet[2716]: I1008 20:04:19.518226 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-xtables-lock\") pod \"cilium-65mkw\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " pod="kube-system/cilium-65mkw" Oct 8 20:04:19.518270 kubelet[2716]: I1008 20:04:19.518247 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-etc-cni-netd\") pod \"cilium-65mkw\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " pod="kube-system/cilium-65mkw" Oct 8 20:04:19.518369 kubelet[2716]: I1008 20:04:19.518266 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d783fb45-9f0c-4534-801b-f71cbf6beb35-clustermesh-secrets\") pod \"cilium-65mkw\" (UID: 
\"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " pod="kube-system/cilium-65mkw" Oct 8 20:04:19.518369 kubelet[2716]: I1008 20:04:19.518285 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-cni-path\") pod \"cilium-65mkw\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " pod="kube-system/cilium-65mkw" Oct 8 20:04:19.726738 kubelet[2716]: I1008 20:04:19.726696 2716 topology_manager.go:215] "Topology Admit Handler" podUID="4d706d7f-20c9-4e49-a33f-b1bc90644f25" podNamespace="kube-system" podName="cilium-operator-5cc964979-xkgt5" Oct 8 20:04:19.735040 kubelet[2716]: E1008 20:04:19.735005 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:19.735992 containerd[1559]: time="2024-10-08T20:04:19.735630507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-htpjg,Uid:51f02bf4-e9b5-4f8b-92e3-0d3f47f463eb,Namespace:kube-system,Attempt:0,}" Oct 8 20:04:19.742984 kubelet[2716]: E1008 20:04:19.742199 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:19.743347 containerd[1559]: time="2024-10-08T20:04:19.743300069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-65mkw,Uid:d783fb45-9f0c-4534-801b-f71cbf6beb35,Namespace:kube-system,Attempt:0,}" Oct 8 20:04:19.762280 containerd[1559]: time="2024-10-08T20:04:19.762200375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:19.762616 containerd[1559]: time="2024-10-08T20:04:19.762287974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:19.762616 containerd[1559]: time="2024-10-08T20:04:19.762323134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:19.762616 containerd[1559]: time="2024-10-08T20:04:19.762458533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:19.774594 containerd[1559]: time="2024-10-08T20:04:19.774482114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:19.774594 containerd[1559]: time="2024-10-08T20:04:19.774552473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:19.774594 containerd[1559]: time="2024-10-08T20:04:19.774564473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:19.774782 containerd[1559]: time="2024-10-08T20:04:19.774650153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:19.801379 containerd[1559]: time="2024-10-08T20:04:19.801339700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-htpjg,Uid:51f02bf4-e9b5-4f8b-92e3-0d3f47f463eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5eca4be866363108d0ecb34822b5ebc1c7d1fc8ccaf733992c62f3c233c4adbc\"" Oct 8 20:04:19.805473 kubelet[2716]: E1008 20:04:19.805451 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:19.808150 containerd[1559]: time="2024-10-08T20:04:19.807691348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-65mkw,Uid:d783fb45-9f0c-4534-801b-f71cbf6beb35,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0\"" Oct 8 20:04:19.808297 kubelet[2716]: E1008 20:04:19.808278 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:19.810068 containerd[1559]: time="2024-10-08T20:04:19.809919617Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 8 20:04:19.811053 containerd[1559]: time="2024-10-08T20:04:19.810903812Z" level=info msg="CreateContainer within sandbox \"5eca4be866363108d0ecb34822b5ebc1c7d1fc8ccaf733992c62f3c233c4adbc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 20:04:19.822964 containerd[1559]: time="2024-10-08T20:04:19.822934192Z" level=info msg="CreateContainer within sandbox \"5eca4be866363108d0ecb34822b5ebc1c7d1fc8ccaf733992c62f3c233c4adbc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bc097278c5a475209246e636af26239584fd79e76018203dd71be15bfd541eea\"" Oct 8 20:04:19.824399 
kubelet[2716]: I1008 20:04:19.824378 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d706d7f-20c9-4e49-a33f-b1bc90644f25-cilium-config-path\") pod \"cilium-operator-5cc964979-xkgt5\" (UID: \"4d706d7f-20c9-4e49-a33f-b1bc90644f25\") " pod="kube-system/cilium-operator-5cc964979-xkgt5" Oct 8 20:04:19.824446 kubelet[2716]: I1008 20:04:19.824417 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkk94\" (UniqueName: \"kubernetes.io/projected/4d706d7f-20c9-4e49-a33f-b1bc90644f25-kube-api-access-qkk94\") pod \"cilium-operator-5cc964979-xkgt5\" (UID: \"4d706d7f-20c9-4e49-a33f-b1bc90644f25\") " pod="kube-system/cilium-operator-5cc964979-xkgt5" Oct 8 20:04:19.824554 containerd[1559]: time="2024-10-08T20:04:19.823436310Z" level=info msg="StartContainer for \"bc097278c5a475209246e636af26239584fd79e76018203dd71be15bfd541eea\"" Oct 8 20:04:19.873805 containerd[1559]: time="2024-10-08T20:04:19.873769500Z" level=info msg="StartContainer for \"bc097278c5a475209246e636af26239584fd79e76018203dd71be15bfd541eea\" returns successfully" Oct 8 20:04:20.036251 kubelet[2716]: E1008 20:04:20.036137 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:20.037044 containerd[1559]: time="2024-10-08T20:04:20.036550581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-xkgt5,Uid:4d706d7f-20c9-4e49-a33f-b1bc90644f25,Namespace:kube-system,Attempt:0,}" Oct 8 20:04:20.054704 containerd[1559]: time="2024-10-08T20:04:20.054613617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:20.054704 containerd[1559]: time="2024-10-08T20:04:20.054671776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:20.054704 containerd[1559]: time="2024-10-08T20:04:20.054682776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:20.054922 containerd[1559]: time="2024-10-08T20:04:20.054767656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:20.102229 containerd[1559]: time="2024-10-08T20:04:20.102192355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-xkgt5,Uid:4d706d7f-20c9-4e49-a33f-b1bc90644f25,Namespace:kube-system,Attempt:0,} returns sandbox id \"5474aec4d189d5be1043604334c718a169fa538b8b9514d3059e2a2859e16877\"" Oct 8 20:04:20.102761 kubelet[2716]: E1008 20:04:20.102736 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:20.478096 kubelet[2716]: E1008 20:04:20.478056 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:20.617230 update_engine[1547]: I20241008 20:04:20.617158 1547 update_attempter.cc:509] Updating boot flags... 
Oct 8 20:04:20.645142 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3086) Oct 8 20:04:20.682355 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3086) Oct 8 20:04:20.713160 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3086) Oct 8 20:04:23.145881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3042013887.mount: Deactivated successfully. Oct 8 20:04:24.404454 containerd[1559]: time="2024-10-08T20:04:24.404408395Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:24.406241 containerd[1559]: time="2024-10-08T20:04:24.406026509Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651502" Oct 8 20:04:24.406994 containerd[1559]: time="2024-10-08T20:04:24.406952946Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:24.408933 containerd[1559]: time="2024-10-08T20:04:24.408810579Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.598849882s" Oct 8 20:04:24.408933 containerd[1559]: time="2024-10-08T20:04:24.408844459Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 8 20:04:24.417531 containerd[1559]: time="2024-10-08T20:04:24.417260869Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 8 20:04:24.420151 containerd[1559]: time="2024-10-08T20:04:24.419700500Z" level=info msg="CreateContainer within sandbox \"a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 20:04:24.458305 containerd[1559]: time="2024-10-08T20:04:24.458258161Z" level=info msg="CreateContainer within sandbox \"a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28\"" Oct 8 20:04:24.459229 containerd[1559]: time="2024-10-08T20:04:24.458684719Z" level=info msg="StartContainer for \"6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28\"" Oct 8 20:04:24.500416 containerd[1559]: time="2024-10-08T20:04:24.500354249Z" level=info msg="StartContainer for \"6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28\" returns successfully" Oct 8 20:04:24.531198 kubelet[2716]: E1008 20:04:24.531163 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:24.570730 kubelet[2716]: I1008 20:04:24.568857 2716 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-htpjg" podStartSLOduration=5.568814643 podStartE2EDuration="5.568814643s" podCreationTimestamp="2024-10-08 20:04:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:04:20.485783726 +0000 UTC m=+16.135626311" 
watchObservedRunningTime="2024-10-08 20:04:24.568814643 +0000 UTC m=+20.218657228" Oct 8 20:04:24.574445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28-rootfs.mount: Deactivated successfully. Oct 8 20:04:24.735749 containerd[1559]: time="2024-10-08T20:04:24.729466344Z" level=info msg="shim disconnected" id=6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28 namespace=k8s.io Oct 8 20:04:24.735749 containerd[1559]: time="2024-10-08T20:04:24.735668721Z" level=warning msg="cleaning up after shim disconnected" id=6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28 namespace=k8s.io Oct 8 20:04:24.735749 containerd[1559]: time="2024-10-08T20:04:24.735682281Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:04:25.532588 kubelet[2716]: E1008 20:04:25.532561 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:25.539374 containerd[1559]: time="2024-10-08T20:04:25.539329427Z" level=info msg="CreateContainer within sandbox \"a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 20:04:25.569462 containerd[1559]: time="2024-10-08T20:04:25.569420406Z" level=info msg="CreateContainer within sandbox \"a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72\"" Oct 8 20:04:25.570380 containerd[1559]: time="2024-10-08T20:04:25.570062084Z" level=info msg="StartContainer for \"430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72\"" Oct 8 20:04:25.616897 containerd[1559]: time="2024-10-08T20:04:25.616851925Z" level=info msg="StartContainer for 
\"430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72\" returns successfully" Oct 8 20:04:25.638755 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 20:04:25.639767 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:04:25.639836 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:04:25.649447 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:04:25.663149 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:04:25.672869 containerd[1559]: time="2024-10-08T20:04:25.672766217Z" level=info msg="shim disconnected" id=430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72 namespace=k8s.io Oct 8 20:04:25.672869 containerd[1559]: time="2024-10-08T20:04:25.672834536Z" level=warning msg="cleaning up after shim disconnected" id=430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72 namespace=k8s.io Oct 8 20:04:25.672869 containerd[1559]: time="2024-10-08T20:04:25.672843456Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:04:25.848686 containerd[1559]: time="2024-10-08T20:04:25.848579143Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:25.849080 containerd[1559]: time="2024-10-08T20:04:25.849046581Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138302" Oct 8 20:04:25.850609 containerd[1559]: time="2024-10-08T20:04:25.850570696Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:25.852258 containerd[1559]: time="2024-10-08T20:04:25.851888692Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.434588344s" Oct 8 20:04:25.852258 containerd[1559]: time="2024-10-08T20:04:25.851933331Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 8 20:04:25.853458 containerd[1559]: time="2024-10-08T20:04:25.853430566Z" level=info msg="CreateContainer within sandbox \"5474aec4d189d5be1043604334c718a169fa538b8b9514d3059e2a2859e16877\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 8 20:04:25.860903 containerd[1559]: time="2024-10-08T20:04:25.860862421Z" level=info msg="CreateContainer within sandbox \"5474aec4d189d5be1043604334c718a169fa538b8b9514d3059e2a2859e16877\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c\"" Oct 8 20:04:25.861908 containerd[1559]: time="2024-10-08T20:04:25.861336780Z" level=info msg="StartContainer for \"9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c\"" Oct 8 20:04:25.902189 containerd[1559]: time="2024-10-08T20:04:25.902102882Z" level=info msg="StartContainer for \"9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c\" returns successfully" Oct 8 20:04:26.447926 systemd[1]: run-containerd-runc-k8s.io-430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72-runc.Sgss6D.mount: Deactivated successfully. 
Oct 8 20:04:26.448056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72-rootfs.mount: Deactivated successfully. Oct 8 20:04:26.536608 kubelet[2716]: E1008 20:04:26.536553 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:26.541067 kubelet[2716]: E1008 20:04:26.541037 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:26.544493 containerd[1559]: time="2024-10-08T20:04:26.544166348Z" level=info msg="CreateContainer within sandbox \"a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 20:04:26.545379 kubelet[2716]: I1008 20:04:26.545356 2716 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-xkgt5" podStartSLOduration=1.796565923 podStartE2EDuration="7.545323185s" podCreationTimestamp="2024-10-08 20:04:19 +0000 UTC" firstStartedPulling="2024-10-08 20:04:20.103366709 +0000 UTC m=+15.753209254" lastFinishedPulling="2024-10-08 20:04:25.852123931 +0000 UTC m=+21.501966516" observedRunningTime="2024-10-08 20:04:26.545170785 +0000 UTC m=+22.195013370" watchObservedRunningTime="2024-10-08 20:04:26.545323185 +0000 UTC m=+22.195165770" Oct 8 20:04:26.566274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount613586087.mount: Deactivated successfully. 
Oct 8 20:04:26.573427 containerd[1559]: time="2024-10-08T20:04:26.572888697Z" level=info msg="CreateContainer within sandbox \"a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45\"" Oct 8 20:04:26.573523 containerd[1559]: time="2024-10-08T20:04:26.573497055Z" level=info msg="StartContainer for \"ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45\"" Oct 8 20:04:26.652842 containerd[1559]: time="2024-10-08T20:04:26.652785724Z" level=info msg="StartContainer for \"ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45\" returns successfully" Oct 8 20:04:26.734218 containerd[1559]: time="2024-10-08T20:04:26.733999507Z" level=info msg="shim disconnected" id=ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45 namespace=k8s.io Oct 8 20:04:26.734218 containerd[1559]: time="2024-10-08T20:04:26.734059747Z" level=warning msg="cleaning up after shim disconnected" id=ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45 namespace=k8s.io Oct 8 20:04:26.734218 containerd[1559]: time="2024-10-08T20:04:26.734071147Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:04:27.457937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45-rootfs.mount: Deactivated successfully. 
Oct 8 20:04:27.544821 kubelet[2716]: E1008 20:04:27.544773 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:27.551622 kubelet[2716]: E1008 20:04:27.551574 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:27.555190 containerd[1559]: time="2024-10-08T20:04:27.555023337Z" level=info msg="CreateContainer within sandbox \"a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 20:04:27.570491 containerd[1559]: time="2024-10-08T20:04:27.570439771Z" level=info msg="CreateContainer within sandbox \"a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f\"" Oct 8 20:04:27.572451 containerd[1559]: time="2024-10-08T20:04:27.571685728Z" level=info msg="StartContainer for \"27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f\"" Oct 8 20:04:27.621958 containerd[1559]: time="2024-10-08T20:04:27.621607540Z" level=info msg="StartContainer for \"27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f\" returns successfully" Oct 8 20:04:27.639788 containerd[1559]: time="2024-10-08T20:04:27.639734286Z" level=info msg="shim disconnected" id=27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f namespace=k8s.io Oct 8 20:04:27.639788 containerd[1559]: time="2024-10-08T20:04:27.639783566Z" level=warning msg="cleaning up after shim disconnected" id=27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f namespace=k8s.io Oct 8 20:04:27.639788 containerd[1559]: time="2024-10-08T20:04:27.639792126Z" level=info msg="cleaning up dead 
shim" namespace=k8s.io Oct 8 20:04:28.445771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f-rootfs.mount: Deactivated successfully. Oct 8 20:04:28.547714 kubelet[2716]: E1008 20:04:28.547690 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:28.550335 containerd[1559]: time="2024-10-08T20:04:28.550297165Z" level=info msg="CreateContainer within sandbox \"a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 8 20:04:28.566245 containerd[1559]: time="2024-10-08T20:04:28.566201241Z" level=info msg="CreateContainer within sandbox \"a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c\"" Oct 8 20:04:28.567324 containerd[1559]: time="2024-10-08T20:04:28.567282078Z" level=info msg="StartContainer for \"afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c\"" Oct 8 20:04:28.612688 containerd[1559]: time="2024-10-08T20:04:28.612638151Z" level=info msg="StartContainer for \"afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c\" returns successfully" Oct 8 20:04:28.798247 kubelet[2716]: I1008 20:04:28.798210 2716 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 20:04:28.822536 kubelet[2716]: I1008 20:04:28.821702 2716 topology_manager.go:215] "Topology Admit Handler" podUID="d094642f-7c99-4e35-a4ab-95d478921fac" podNamespace="kube-system" podName="coredns-76f75df574-dcs68" Oct 8 20:04:28.822536 kubelet[2716]: I1008 20:04:28.821994 2716 topology_manager.go:215] "Topology Admit Handler" podUID="e2faf741-0266-4f55-907e-f24bdb1678eb" podNamespace="kube-system" 
podName="coredns-76f75df574-9tp2w" Oct 8 20:04:28.991792 kubelet[2716]: I1008 20:04:28.991584 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7gsr\" (UniqueName: \"kubernetes.io/projected/d094642f-7c99-4e35-a4ab-95d478921fac-kube-api-access-r7gsr\") pod \"coredns-76f75df574-dcs68\" (UID: \"d094642f-7c99-4e35-a4ab-95d478921fac\") " pod="kube-system/coredns-76f75df574-dcs68" Oct 8 20:04:28.991792 kubelet[2716]: I1008 20:04:28.991634 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d094642f-7c99-4e35-a4ab-95d478921fac-config-volume\") pod \"coredns-76f75df574-dcs68\" (UID: \"d094642f-7c99-4e35-a4ab-95d478921fac\") " pod="kube-system/coredns-76f75df574-dcs68" Oct 8 20:04:28.991792 kubelet[2716]: I1008 20:04:28.991656 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk2ln\" (UniqueName: \"kubernetes.io/projected/e2faf741-0266-4f55-907e-f24bdb1678eb-kube-api-access-xk2ln\") pod \"coredns-76f75df574-9tp2w\" (UID: \"e2faf741-0266-4f55-907e-f24bdb1678eb\") " pod="kube-system/coredns-76f75df574-9tp2w" Oct 8 20:04:28.991792 kubelet[2716]: I1008 20:04:28.991680 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2faf741-0266-4f55-907e-f24bdb1678eb-config-volume\") pod \"coredns-76f75df574-9tp2w\" (UID: \"e2faf741-0266-4f55-907e-f24bdb1678eb\") " pod="kube-system/coredns-76f75df574-9tp2w" Oct 8 20:04:29.137399 kubelet[2716]: E1008 20:04:29.137092 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:29.137399 kubelet[2716]: E1008 20:04:29.137242 2716 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:29.138940 containerd[1559]: time="2024-10-08T20:04:29.138580872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dcs68,Uid:d094642f-7c99-4e35-a4ab-95d478921fac,Namespace:kube-system,Attempt:0,}" Oct 8 20:04:29.139507 containerd[1559]: time="2024-10-08T20:04:29.139315790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9tp2w,Uid:e2faf741-0266-4f55-907e-f24bdb1678eb,Namespace:kube-system,Attempt:0,}" Oct 8 20:04:29.553874 kubelet[2716]: E1008 20:04:29.553822 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:30.556073 kubelet[2716]: E1008 20:04:30.556031 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:30.720828 systemd-networkd[1237]: cilium_host: Link UP Oct 8 20:04:30.720954 systemd-networkd[1237]: cilium_net: Link UP Oct 8 20:04:30.720957 systemd-networkd[1237]: cilium_net: Gained carrier Oct 8 20:04:30.722998 systemd-networkd[1237]: cilium_host: Gained carrier Oct 8 20:04:30.723205 systemd-networkd[1237]: cilium_host: Gained IPv6LL Oct 8 20:04:30.792894 systemd-networkd[1237]: cilium_vxlan: Link UP Oct 8 20:04:30.792902 systemd-networkd[1237]: cilium_vxlan: Gained carrier Oct 8 20:04:31.087144 kernel: NET: Registered PF_ALG protocol family Oct 8 20:04:31.557889 kubelet[2716]: E1008 20:04:31.557846 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:31.636823 systemd-networkd[1237]: cilium_net: Gained IPv6LL Oct 8 20:04:31.642433 systemd-networkd[1237]: 
lxc_health: Link UP Oct 8 20:04:31.649634 systemd-networkd[1237]: lxc_health: Gained carrier Oct 8 20:04:31.767251 kubelet[2716]: I1008 20:04:31.767205 2716 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-65mkw" podStartSLOduration=8.165299589 podStartE2EDuration="12.767164819s" podCreationTimestamp="2024-10-08 20:04:19 +0000 UTC" firstStartedPulling="2024-10-08 20:04:19.808884862 +0000 UTC m=+15.458727407" lastFinishedPulling="2024-10-08 20:04:24.410750052 +0000 UTC m=+20.060592637" observedRunningTime="2024-10-08 20:04:29.569536147 +0000 UTC m=+25.219378732" watchObservedRunningTime="2024-10-08 20:04:31.767164819 +0000 UTC m=+27.417007404" Oct 8 20:04:31.782948 systemd-networkd[1237]: lxc47435113eb1f: Link UP Oct 8 20:04:31.797280 systemd-networkd[1237]: lxc9d62d1e5a7f0: Link UP Oct 8 20:04:31.805295 kernel: eth0: renamed from tmpc935a Oct 8 20:04:31.820231 kernel: eth0: renamed from tmpe6f16 Oct 8 20:04:31.823511 systemd-networkd[1237]: lxc9d62d1e5a7f0: Gained carrier Oct 8 20:04:31.829644 systemd-networkd[1237]: lxc47435113eb1f: Gained carrier Oct 8 20:04:32.531255 systemd-networkd[1237]: cilium_vxlan: Gained IPv6LL Oct 8 20:04:32.560095 kubelet[2716]: E1008 20:04:32.560061 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:32.915265 systemd-networkd[1237]: lxc47435113eb1f: Gained IPv6LL Oct 8 20:04:32.915541 systemd-networkd[1237]: lxc9d62d1e5a7f0: Gained IPv6LL Oct 8 20:04:33.300240 systemd-networkd[1237]: lxc_health: Gained IPv6LL Oct 8 20:04:35.279664 containerd[1559]: time="2024-10-08T20:04:35.279593015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:35.279664 containerd[1559]: time="2024-10-08T20:04:35.279638335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:35.279664 containerd[1559]: time="2024-10-08T20:04:35.279649095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:35.280241 containerd[1559]: time="2024-10-08T20:04:35.279726775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:35.286570 containerd[1559]: time="2024-10-08T20:04:35.285346005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:35.286570 containerd[1559]: time="2024-10-08T20:04:35.285429605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:35.286570 containerd[1559]: time="2024-10-08T20:04:35.285443845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:35.286570 containerd[1559]: time="2024-10-08T20:04:35.285862124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:35.306301 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 20:04:35.307346 systemd-resolved[1454]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 20:04:35.333903 containerd[1559]: time="2024-10-08T20:04:35.333812279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dcs68,Uid:d094642f-7c99-4e35-a4ab-95d478921fac,Namespace:kube-system,Attempt:0,} returns sandbox id \"c935ac134452647a162cff6ab853ab6538c543d66194c2812f76c123595273a9\"" Oct 8 20:04:35.334017 containerd[1559]: time="2024-10-08T20:04:35.333865279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9tp2w,Uid:e2faf741-0266-4f55-907e-f24bdb1678eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6f161caa182ac614ec91934df65be8e9bbdeed92e506d2a2417db405e7493d0\"" Oct 8 20:04:35.336540 kubelet[2716]: E1008 20:04:35.336349 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:35.337093 kubelet[2716]: E1008 20:04:35.336927 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:35.339015 containerd[1559]: time="2024-10-08T20:04:35.338980710Z" level=info msg="CreateContainer within sandbox \"c935ac134452647a162cff6ab853ab6538c543d66194c2812f76c123595273a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:04:35.340140 containerd[1559]: time="2024-10-08T20:04:35.339537469Z" level=info msg="CreateContainer within sandbox \"e6f161caa182ac614ec91934df65be8e9bbdeed92e506d2a2417db405e7493d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:04:35.360999 
containerd[1559]: time="2024-10-08T20:04:35.360874511Z" level=info msg="CreateContainer within sandbox \"e6f161caa182ac614ec91934df65be8e9bbdeed92e506d2a2417db405e7493d0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"36fb74f2d246effa5629a75202d817fe3f6926e113bb5c88d0d0e7376b215a3f\"" Oct 8 20:04:35.361537 containerd[1559]: time="2024-10-08T20:04:35.361499270Z" level=info msg="StartContainer for \"36fb74f2d246effa5629a75202d817fe3f6926e113bb5c88d0d0e7376b215a3f\"" Oct 8 20:04:35.368428 containerd[1559]: time="2024-10-08T20:04:35.368388658Z" level=info msg="CreateContainer within sandbox \"c935ac134452647a162cff6ab853ab6538c543d66194c2812f76c123595273a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b96bdc6fde7ddc886fba554428a9a44ea3552cc55ef11cd64554c808c2b7302c\"" Oct 8 20:04:35.368861 containerd[1559]: time="2024-10-08T20:04:35.368838257Z" level=info msg="StartContainer for \"b96bdc6fde7ddc886fba554428a9a44ea3552cc55ef11cd64554c808c2b7302c\"" Oct 8 20:04:35.412137 containerd[1559]: time="2024-10-08T20:04:35.412085700Z" level=info msg="StartContainer for \"36fb74f2d246effa5629a75202d817fe3f6926e113bb5c88d0d0e7376b215a3f\" returns successfully" Oct 8 20:04:35.417248 containerd[1559]: time="2024-10-08T20:04:35.414630096Z" level=info msg="StartContainer for \"b96bdc6fde7ddc886fba554428a9a44ea3552cc55ef11cd64554c808c2b7302c\" returns successfully" Oct 8 20:04:35.582400 kubelet[2716]: E1008 20:04:35.574573 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:35.586083 kubelet[2716]: E1008 20:04:35.584677 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:35.606980 kubelet[2716]: I1008 20:04:35.606414 2716 pod_startup_latency_tracker.go:102] "Observed 
pod startup duration" pod="kube-system/coredns-76f75df574-9tp2w" podStartSLOduration=16.606379116 podStartE2EDuration="16.606379116s" podCreationTimestamp="2024-10-08 20:04:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:04:35.590447584 +0000 UTC m=+31.240290169" watchObservedRunningTime="2024-10-08 20:04:35.606379116 +0000 UTC m=+31.256221701" Oct 8 20:04:35.805373 systemd[1]: Started sshd@7-10.0.0.154:22-10.0.0.1:51170.service - OpenSSH per-connection server daemon (10.0.0.1:51170). Oct 8 20:04:35.844048 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 51170 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:04:35.845402 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:04:35.849405 systemd-logind[1544]: New session 8 of user core. Oct 8 20:04:35.859499 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 20:04:35.986201 sshd[4114]: pam_unix(sshd:session): session closed for user core Oct 8 20:04:35.989468 systemd[1]: sshd@7-10.0.0.154:22-10.0.0.1:51170.service: Deactivated successfully. Oct 8 20:04:35.991341 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 20:04:35.991481 systemd-logind[1544]: Session 8 logged out. Waiting for processes to exit. Oct 8 20:04:35.994029 systemd-logind[1544]: Removed session 8. Oct 8 20:04:36.284514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3288885812.mount: Deactivated successfully. 
Oct 8 20:04:36.586383 kubelet[2716]: E1008 20:04:36.586161 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:36.589333 kubelet[2716]: E1008 20:04:36.589000 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:36.597939 kubelet[2716]: I1008 20:04:36.597895 2716 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-dcs68" podStartSLOduration=17.597858786 podStartE2EDuration="17.597858786s" podCreationTimestamp="2024-10-08 20:04:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:04:35.607670874 +0000 UTC m=+31.257513459" watchObservedRunningTime="2024-10-08 20:04:36.597858786 +0000 UTC m=+32.247701371" Oct 8 20:04:37.588614 kubelet[2716]: E1008 20:04:37.588239 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:37.588614 kubelet[2716]: E1008 20:04:37.588290 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:40.999377 systemd[1]: Started sshd@8-10.0.0.154:22-10.0.0.1:51184.service - OpenSSH per-connection server daemon (10.0.0.1:51184). Oct 8 20:04:41.038669 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 51184 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:04:41.040030 sshd[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:04:41.044333 systemd-logind[1544]: New session 9 of user core. 
Oct 8 20:04:41.055485 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 20:04:41.169088 sshd[4136]: pam_unix(sshd:session): session closed for user core Oct 8 20:04:41.173321 systemd[1]: sshd@8-10.0.0.154:22-10.0.0.1:51184.service: Deactivated successfully. Oct 8 20:04:41.176497 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 20:04:41.176531 systemd-logind[1544]: Session 9 logged out. Waiting for processes to exit. Oct 8 20:04:41.178064 systemd-logind[1544]: Removed session 9. Oct 8 20:04:45.045779 kubelet[2716]: I1008 20:04:45.045319 2716 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:04:45.046163 kubelet[2716]: E1008 20:04:45.046035 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:45.610538 kubelet[2716]: E1008 20:04:45.609922 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:04:46.185338 systemd[1]: Started sshd@9-10.0.0.154:22-10.0.0.1:34126.service - OpenSSH per-connection server daemon (10.0.0.1:34126). Oct 8 20:04:46.218724 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 34126 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:04:46.219226 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:04:46.223727 systemd-logind[1544]: New session 10 of user core. Oct 8 20:04:46.229414 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 20:04:46.338501 sshd[4153]: pam_unix(sshd:session): session closed for user core Oct 8 20:04:46.347350 systemd[1]: Started sshd@10-10.0.0.154:22-10.0.0.1:34130.service - OpenSSH per-connection server daemon (10.0.0.1:34130). 
Oct 8 20:04:46.348093 systemd[1]: sshd@9-10.0.0.154:22-10.0.0.1:34126.service: Deactivated successfully. Oct 8 20:04:46.349633 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 20:04:46.351146 systemd-logind[1544]: Session 10 logged out. Waiting for processes to exit. Oct 8 20:04:46.352236 systemd-logind[1544]: Removed session 10. Oct 8 20:04:46.379946 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 34130 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:04:46.381228 sshd[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:04:46.385575 systemd-logind[1544]: New session 11 of user core. Oct 8 20:04:46.394517 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 20:04:46.563393 sshd[4167]: pam_unix(sshd:session): session closed for user core Oct 8 20:04:46.573473 systemd[1]: Started sshd@11-10.0.0.154:22-10.0.0.1:34134.service - OpenSSH per-connection server daemon (10.0.0.1:34134). Oct 8 20:04:46.573864 systemd[1]: sshd@10-10.0.0.154:22-10.0.0.1:34130.service: Deactivated successfully. Oct 8 20:04:46.576591 systemd-logind[1544]: Session 11 logged out. Waiting for processes to exit. Oct 8 20:04:46.580078 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 20:04:46.582084 systemd-logind[1544]: Removed session 11. Oct 8 20:04:46.619223 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 34134 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:04:46.620479 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:04:46.628390 systemd-logind[1544]: New session 12 of user core. Oct 8 20:04:46.636593 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 20:04:46.758278 sshd[4180]: pam_unix(sshd:session): session closed for user core Oct 8 20:04:46.763129 systemd[1]: sshd@11-10.0.0.154:22-10.0.0.1:34134.service: Deactivated successfully. 
Oct 8 20:04:46.765296 systemd-logind[1544]: Session 12 logged out. Waiting for processes to exit. Oct 8 20:04:46.765510 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 20:04:46.766670 systemd-logind[1544]: Removed session 12. Oct 8 20:04:51.770424 systemd[1]: Started sshd@12-10.0.0.154:22-10.0.0.1:34146.service - OpenSSH per-connection server daemon (10.0.0.1:34146). Oct 8 20:04:51.802494 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 34146 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:04:51.804107 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:04:51.810103 systemd-logind[1544]: New session 13 of user core. Oct 8 20:04:51.815398 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 20:04:51.927150 sshd[4200]: pam_unix(sshd:session): session closed for user core Oct 8 20:04:51.931236 systemd[1]: sshd@12-10.0.0.154:22-10.0.0.1:34146.service: Deactivated successfully. Oct 8 20:04:51.933457 systemd-logind[1544]: Session 13 logged out. Waiting for processes to exit. Oct 8 20:04:51.933554 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 20:04:51.934649 systemd-logind[1544]: Removed session 13. Oct 8 20:04:56.938340 systemd[1]: Started sshd@13-10.0.0.154:22-10.0.0.1:45688.service - OpenSSH per-connection server daemon (10.0.0.1:45688). Oct 8 20:04:56.975602 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 45688 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:04:56.976074 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:04:56.980182 systemd-logind[1544]: New session 14 of user core. Oct 8 20:04:56.994348 systemd[1]: Started session-14.scope - Session 14 of User core. 
Oct 8 20:04:57.121167 sshd[4215]: pam_unix(sshd:session): session closed for user core Oct 8 20:04:57.130334 systemd[1]: Started sshd@14-10.0.0.154:22-10.0.0.1:45700.service - OpenSSH per-connection server daemon (10.0.0.1:45700). Oct 8 20:04:57.131045 systemd[1]: sshd@13-10.0.0.154:22-10.0.0.1:45688.service: Deactivated successfully. Oct 8 20:04:57.132500 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 20:04:57.138013 systemd-logind[1544]: Session 14 logged out. Waiting for processes to exit. Oct 8 20:04:57.139142 systemd-logind[1544]: Removed session 14. Oct 8 20:04:57.167718 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 45700 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:04:57.168885 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:04:57.172771 systemd-logind[1544]: New session 15 of user core. Oct 8 20:04:57.180323 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 20:04:57.388641 sshd[4228]: pam_unix(sshd:session): session closed for user core Oct 8 20:04:57.401364 systemd[1]: Started sshd@15-10.0.0.154:22-10.0.0.1:45704.service - OpenSSH per-connection server daemon (10.0.0.1:45704). Oct 8 20:04:57.401802 systemd[1]: sshd@14-10.0.0.154:22-10.0.0.1:45700.service: Deactivated successfully. Oct 8 20:04:57.403580 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 20:04:57.405607 systemd-logind[1544]: Session 15 logged out. Waiting for processes to exit. Oct 8 20:04:57.406606 systemd-logind[1544]: Removed session 15. Oct 8 20:04:57.441819 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 45704 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:04:57.443077 sshd[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:04:57.447324 systemd-logind[1544]: New session 16 of user core. Oct 8 20:04:57.455347 systemd[1]: Started session-16.scope - Session 16 of User core. 
Oct 8 20:04:58.701295 sshd[4241]: pam_unix(sshd:session): session closed for user core Oct 8 20:04:58.715524 systemd[1]: Started sshd@16-10.0.0.154:22-10.0.0.1:45716.service - OpenSSH per-connection server daemon (10.0.0.1:45716). Oct 8 20:04:58.717923 systemd[1]: sshd@15-10.0.0.154:22-10.0.0.1:45704.service: Deactivated successfully. Oct 8 20:04:58.719557 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 20:04:58.722435 systemd-logind[1544]: Session 16 logged out. Waiting for processes to exit. Oct 8 20:04:58.723480 systemd-logind[1544]: Removed session 16. Oct 8 20:04:58.754933 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 45716 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:04:58.754262 sshd[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:04:58.759294 systemd-logind[1544]: New session 17 of user core. Oct 8 20:04:58.764330 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 20:04:58.979994 sshd[4264]: pam_unix(sshd:session): session closed for user core Oct 8 20:04:58.991526 systemd[1]: Started sshd@17-10.0.0.154:22-10.0.0.1:45726.service - OpenSSH per-connection server daemon (10.0.0.1:45726). Oct 8 20:04:58.992444 systemd[1]: sshd@16-10.0.0.154:22-10.0.0.1:45716.service: Deactivated successfully. Oct 8 20:04:58.995688 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 20:04:58.996406 systemd-logind[1544]: Session 17 logged out. Waiting for processes to exit. Oct 8 20:04:58.997874 systemd-logind[1544]: Removed session 17. Oct 8 20:04:59.023821 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 45726 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:04:59.025109 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:04:59.028887 systemd-logind[1544]: New session 18 of user core. Oct 8 20:04:59.040470 systemd[1]: Started session-18.scope - Session 18 of User core. 
Oct 8 20:04:59.151820 sshd[4277]: pam_unix(sshd:session): session closed for user core Oct 8 20:04:59.155746 systemd[1]: sshd@17-10.0.0.154:22-10.0.0.1:45726.service: Deactivated successfully. Oct 8 20:04:59.157893 systemd-logind[1544]: Session 18 logged out. Waiting for processes to exit. Oct 8 20:04:59.158471 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 20:04:59.159639 systemd-logind[1544]: Removed session 18. Oct 8 20:05:04.169410 systemd[1]: Started sshd@18-10.0.0.154:22-10.0.0.1:34376.service - OpenSSH per-connection server daemon (10.0.0.1:34376). Oct 8 20:05:04.202649 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 34376 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:05:04.203765 sshd[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:04.207654 systemd-logind[1544]: New session 19 of user core. Oct 8 20:05:04.218345 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 20:05:04.324702 sshd[4295]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:04.327465 systemd-logind[1544]: Session 19 logged out. Waiting for processes to exit. Oct 8 20:05:04.327663 systemd[1]: sshd@18-10.0.0.154:22-10.0.0.1:34376.service: Deactivated successfully. Oct 8 20:05:04.330067 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 20:05:04.330909 systemd-logind[1544]: Removed session 19. Oct 8 20:05:09.341335 systemd[1]: Started sshd@19-10.0.0.154:22-10.0.0.1:34378.service - OpenSSH per-connection server daemon (10.0.0.1:34378). Oct 8 20:05:09.373862 sshd[4315]: Accepted publickey for core from 10.0.0.1 port 34378 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:05:09.374346 sshd[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:09.378049 systemd-logind[1544]: New session 20 of user core. Oct 8 20:05:09.385317 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 8 20:05:09.488975 sshd[4315]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:09.492598 systemd[1]: sshd@19-10.0.0.154:22-10.0.0.1:34378.service: Deactivated successfully. Oct 8 20:05:09.494833 systemd[1]: session-20.scope: Deactivated successfully. Oct 8 20:05:09.497248 systemd-logind[1544]: Session 20 logged out. Waiting for processes to exit. Oct 8 20:05:09.498444 systemd-logind[1544]: Removed session 20. Oct 8 20:05:14.500339 systemd[1]: Started sshd@20-10.0.0.154:22-10.0.0.1:40930.service - OpenSSH per-connection server daemon (10.0.0.1:40930). Oct 8 20:05:14.532412 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 40930 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:05:14.533590 sshd[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:14.537499 systemd-logind[1544]: New session 21 of user core. Oct 8 20:05:14.546345 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 8 20:05:14.650742 sshd[4330]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:14.656985 systemd[1]: sshd@20-10.0.0.154:22-10.0.0.1:40930.service: Deactivated successfully. Oct 8 20:05:14.657660 systemd-logind[1544]: Session 21 logged out. Waiting for processes to exit. Oct 8 20:05:14.660949 systemd[1]: session-21.scope: Deactivated successfully. Oct 8 20:05:14.663722 systemd-logind[1544]: Removed session 21. Oct 8 20:05:19.660373 systemd[1]: Started sshd@21-10.0.0.154:22-10.0.0.1:40946.service - OpenSSH per-connection server daemon (10.0.0.1:40946). Oct 8 20:05:19.692581 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 40946 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:05:19.693831 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:19.697161 systemd-logind[1544]: New session 22 of user core. Oct 8 20:05:19.713460 systemd[1]: Started session-22.scope - Session 22 of User core. 
Oct 8 20:05:19.818790 sshd[4345]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:19.827320 systemd[1]: Started sshd@22-10.0.0.154:22-10.0.0.1:40956.service - OpenSSH per-connection server daemon (10.0.0.1:40956). Oct 8 20:05:19.827690 systemd[1]: sshd@21-10.0.0.154:22-10.0.0.1:40946.service: Deactivated successfully. Oct 8 20:05:19.830552 systemd[1]: session-22.scope: Deactivated successfully. Oct 8 20:05:19.830777 systemd-logind[1544]: Session 22 logged out. Waiting for processes to exit. Oct 8 20:05:19.832147 systemd-logind[1544]: Removed session 22. Oct 8 20:05:19.862143 sshd[4357]: Accepted publickey for core from 10.0.0.1 port 40956 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:05:19.863347 sshd[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:19.867032 systemd-logind[1544]: New session 23 of user core. Oct 8 20:05:19.881383 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 8 20:05:21.461495 containerd[1559]: time="2024-10-08T20:05:21.461367772Z" level=info msg="StopContainer for \"9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c\" with timeout 30 (s)" Oct 8 20:05:21.462764 containerd[1559]: time="2024-10-08T20:05:21.462591008Z" level=info msg="Stop container \"9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c\" with signal terminated" Oct 8 20:05:21.499069 containerd[1559]: time="2024-10-08T20:05:21.499025416Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 20:05:21.505556 containerd[1559]: time="2024-10-08T20:05:21.505489477Z" level=info msg="StopContainer for \"afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c\" with timeout 2 (s)" Oct 8 20:05:21.505824 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c-rootfs.mount: Deactivated successfully. Oct 8 20:05:21.506123 containerd[1559]: time="2024-10-08T20:05:21.506051755Z" level=info msg="Stop container \"afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c\" with signal terminated" Oct 8 20:05:21.513078 systemd-networkd[1237]: lxc_health: Link DOWN Oct 8 20:05:21.513086 systemd-networkd[1237]: lxc_health: Lost carrier Oct 8 20:05:21.516358 containerd[1559]: time="2024-10-08T20:05:21.515518126Z" level=info msg="shim disconnected" id=9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c namespace=k8s.io Oct 8 20:05:21.516449 containerd[1559]: time="2024-10-08T20:05:21.516358843Z" level=warning msg="cleaning up after shim disconnected" id=9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c namespace=k8s.io Oct 8 20:05:21.516449 containerd[1559]: time="2024-10-08T20:05:21.516370523Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:05:21.549718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c-rootfs.mount: Deactivated successfully. 
Oct 8 20:05:21.556100 containerd[1559]: time="2024-10-08T20:05:21.556035041Z" level=info msg="shim disconnected" id=afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c namespace=k8s.io Oct 8 20:05:21.556100 containerd[1559]: time="2024-10-08T20:05:21.556096481Z" level=warning msg="cleaning up after shim disconnected" id=afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c namespace=k8s.io Oct 8 20:05:21.556100 containerd[1559]: time="2024-10-08T20:05:21.556105361Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:05:21.565099 containerd[1559]: time="2024-10-08T20:05:21.564964414Z" level=info msg="StopContainer for \"9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c\" returns successfully" Oct 8 20:05:21.565781 containerd[1559]: time="2024-10-08T20:05:21.565753171Z" level=info msg="StopPodSandbox for \"5474aec4d189d5be1043604334c718a169fa538b8b9514d3059e2a2859e16877\"" Oct 8 20:05:21.565830 containerd[1559]: time="2024-10-08T20:05:21.565796771Z" level=info msg="Container to stop \"9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:05:21.567803 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5474aec4d189d5be1043604334c718a169fa538b8b9514d3059e2a2859e16877-shm.mount: Deactivated successfully. 
Oct 8 20:05:21.574952 containerd[1559]: time="2024-10-08T20:05:21.574038906Z" level=info msg="StopContainer for \"afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c\" returns successfully" Oct 8 20:05:21.575512 containerd[1559]: time="2024-10-08T20:05:21.575474501Z" level=info msg="StopPodSandbox for \"a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0\"" Oct 8 20:05:21.575576 containerd[1559]: time="2024-10-08T20:05:21.575521781Z" level=info msg="Container to stop \"ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:05:21.575576 containerd[1559]: time="2024-10-08T20:05:21.575535581Z" level=info msg="Container to stop \"27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:05:21.575576 containerd[1559]: time="2024-10-08T20:05:21.575546341Z" level=info msg="Container to stop \"afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:05:21.575576 containerd[1559]: time="2024-10-08T20:05:21.575556461Z" level=info msg="Container to stop \"6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:05:21.575576 containerd[1559]: time="2024-10-08T20:05:21.575565781Z" level=info msg="Container to stop \"430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 20:05:21.577356 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0-shm.mount: Deactivated successfully. 
Oct 8 20:05:21.601258 containerd[1559]: time="2024-10-08T20:05:21.601203622Z" level=info msg="shim disconnected" id=5474aec4d189d5be1043604334c718a169fa538b8b9514d3059e2a2859e16877 namespace=k8s.io Oct 8 20:05:21.601452 containerd[1559]: time="2024-10-08T20:05:21.601436221Z" level=warning msg="cleaning up after shim disconnected" id=5474aec4d189d5be1043604334c718a169fa538b8b9514d3059e2a2859e16877 namespace=k8s.io Oct 8 20:05:21.601519 containerd[1559]: time="2024-10-08T20:05:21.601506021Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:05:21.607367 containerd[1559]: time="2024-10-08T20:05:21.607304363Z" level=info msg="shim disconnected" id=a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0 namespace=k8s.io Oct 8 20:05:21.607367 containerd[1559]: time="2024-10-08T20:05:21.607364923Z" level=warning msg="cleaning up after shim disconnected" id=a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0 namespace=k8s.io Oct 8 20:05:21.607527 containerd[1559]: time="2024-10-08T20:05:21.607379483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:05:21.623317 containerd[1559]: time="2024-10-08T20:05:21.622624036Z" level=info msg="TearDown network for sandbox \"5474aec4d189d5be1043604334c718a169fa538b8b9514d3059e2a2859e16877\" successfully" Oct 8 20:05:21.623506 containerd[1559]: time="2024-10-08T20:05:21.623482234Z" level=info msg="StopPodSandbox for \"5474aec4d189d5be1043604334c718a169fa538b8b9514d3059e2a2859e16877\" returns successfully" Oct 8 20:05:21.630654 containerd[1559]: time="2024-10-08T20:05:21.630611732Z" level=info msg="TearDown network for sandbox \"a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0\" successfully" Oct 8 20:05:21.630865 containerd[1559]: time="2024-10-08T20:05:21.630747451Z" level=info msg="StopPodSandbox for \"a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0\" returns successfully" Oct 8 20:05:21.676259 kubelet[2716]: I1008 20:05:21.676219 2716 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-cilium-run\") pod \"d783fb45-9f0c-4534-801b-f71cbf6beb35\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " Oct 8 20:05:21.676991 kubelet[2716]: I1008 20:05:21.676749 2716 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-cilium-cgroup\") pod \"d783fb45-9f0c-4534-801b-f71cbf6beb35\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " Oct 8 20:05:21.676991 kubelet[2716]: I1008 20:05:21.676784 2716 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-host-proc-sys-kernel\") pod \"d783fb45-9f0c-4534-801b-f71cbf6beb35\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " Oct 8 20:05:21.676991 kubelet[2716]: I1008 20:05:21.676806 2716 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-hostproc\") pod \"d783fb45-9f0c-4534-801b-f71cbf6beb35\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " Oct 8 20:05:21.676991 kubelet[2716]: I1008 20:05:21.676849 2716 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d783fb45-9f0c-4534-801b-f71cbf6beb35-hubble-tls\") pod \"d783fb45-9f0c-4534-801b-f71cbf6beb35\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " Oct 8 20:05:21.680134 kubelet[2716]: I1008 20:05:21.679860 2716 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d783fb45-9f0c-4534-801b-f71cbf6beb35" (UID: 
"d783fb45-9f0c-4534-801b-f71cbf6beb35"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:05:21.680134 kubelet[2716]: I1008 20:05:21.679873 2716 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d783fb45-9f0c-4534-801b-f71cbf6beb35" (UID: "d783fb45-9f0c-4534-801b-f71cbf6beb35"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:05:21.680134 kubelet[2716]: I1008 20:05:21.679906 2716 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-hostproc" (OuterVolumeSpecName: "hostproc") pod "d783fb45-9f0c-4534-801b-f71cbf6beb35" (UID: "d783fb45-9f0c-4534-801b-f71cbf6beb35"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:05:21.680987 kubelet[2716]: I1008 20:05:21.680921 2716 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d783fb45-9f0c-4534-801b-f71cbf6beb35" (UID: "d783fb45-9f0c-4534-801b-f71cbf6beb35"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:05:21.685878 kubelet[2716]: I1008 20:05:21.685833 2716 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d783fb45-9f0c-4534-801b-f71cbf6beb35-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d783fb45-9f0c-4534-801b-f71cbf6beb35" (UID: "d783fb45-9f0c-4534-801b-f71cbf6beb35"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 20:05:21.695462 kubelet[2716]: I1008 20:05:21.695267 2716 scope.go:117] "RemoveContainer" containerID="9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c" Oct 8 20:05:21.710977 containerd[1559]: time="2024-10-08T20:05:21.710937125Z" level=info msg="RemoveContainer for \"9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c\"" Oct 8 20:05:21.719486 containerd[1559]: time="2024-10-08T20:05:21.719392419Z" level=info msg="RemoveContainer for \"9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c\" returns successfully" Oct 8 20:05:21.720187 kubelet[2716]: I1008 20:05:21.720159 2716 scope.go:117] "RemoveContainer" containerID="9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c" Oct 8 20:05:21.720392 containerd[1559]: time="2024-10-08T20:05:21.720354816Z" level=error msg="ContainerStatus for \"9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c\": not found" Oct 8 20:05:21.730018 kubelet[2716]: E1008 20:05:21.729992 2716 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c\": not found" containerID="9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c" Oct 8 20:05:21.733077 kubelet[2716]: I1008 20:05:21.733043 2716 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c"} err="failed to get container status \"9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"9ad9fd1fea6876df61124ebdccd205bac4108e4ccc29a22c903aaea34464d42c\": not found" Oct 8 20:05:21.733141 kubelet[2716]: I1008 20:05:21.733082 2716 scope.go:117] "RemoveContainer" containerID="afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c" Oct 8 20:05:21.734084 containerd[1559]: time="2024-10-08T20:05:21.734057574Z" level=info msg="RemoveContainer for \"afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c\"" Oct 8 20:05:21.736626 containerd[1559]: time="2024-10-08T20:05:21.736586966Z" level=info msg="RemoveContainer for \"afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c\" returns successfully" Oct 8 20:05:21.736776 kubelet[2716]: I1008 20:05:21.736745 2716 scope.go:117] "RemoveContainer" containerID="27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f" Oct 8 20:05:21.737634 containerd[1559]: time="2024-10-08T20:05:21.737610923Z" level=info msg="RemoveContainer for \"27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f\"" Oct 8 20:05:21.739973 containerd[1559]: time="2024-10-08T20:05:21.739934315Z" level=info msg="RemoveContainer for \"27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f\" returns successfully" Oct 8 20:05:21.740092 kubelet[2716]: I1008 20:05:21.740060 2716 scope.go:117] "RemoveContainer" containerID="ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45" Oct 8 20:05:21.740945 containerd[1559]: time="2024-10-08T20:05:21.740924952Z" level=info msg="RemoveContainer for \"ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45\"" Oct 8 20:05:21.743172 containerd[1559]: time="2024-10-08T20:05:21.743137706Z" level=info msg="RemoveContainer for \"ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45\" returns successfully" Oct 8 20:05:21.743329 kubelet[2716]: I1008 20:05:21.743295 2716 scope.go:117] "RemoveContainer" containerID="430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72" Oct 8 20:05:21.744186 containerd[1559]: 
time="2024-10-08T20:05:21.744167862Z" level=info msg="RemoveContainer for \"430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72\"" Oct 8 20:05:21.746540 containerd[1559]: time="2024-10-08T20:05:21.746509575Z" level=info msg="RemoveContainer for \"430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72\" returns successfully" Oct 8 20:05:21.746718 kubelet[2716]: I1008 20:05:21.746674 2716 scope.go:117] "RemoveContainer" containerID="6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28" Oct 8 20:05:21.747590 containerd[1559]: time="2024-10-08T20:05:21.747560092Z" level=info msg="RemoveContainer for \"6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28\"" Oct 8 20:05:21.749735 containerd[1559]: time="2024-10-08T20:05:21.749703485Z" level=info msg="RemoveContainer for \"6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28\" returns successfully" Oct 8 20:05:21.749936 kubelet[2716]: I1008 20:05:21.749839 2716 scope.go:117] "RemoveContainer" containerID="afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c" Oct 8 20:05:21.750283 containerd[1559]: time="2024-10-08T20:05:21.750190684Z" level=error msg="ContainerStatus for \"afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c\": not found" Oct 8 20:05:21.750361 kubelet[2716]: E1008 20:05:21.750318 2716 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c\": not found" containerID="afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c" Oct 8 20:05:21.750361 kubelet[2716]: I1008 20:05:21.750346 2716 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c"} err="failed to get container status \"afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c\": rpc error: code = NotFound desc = an error occurred when try to find container \"afdc0355e245c81e90c5ae6a5140960f40ee0f410d139420bdee8c181d5e3f0c\": not found" Oct 8 20:05:21.750361 kubelet[2716]: I1008 20:05:21.750356 2716 scope.go:117] "RemoveContainer" containerID="27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f" Oct 8 20:05:21.750522 containerd[1559]: time="2024-10-08T20:05:21.750488443Z" level=error msg="ContainerStatus for \"27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f\": not found" Oct 8 20:05:21.750624 kubelet[2716]: E1008 20:05:21.750599 2716 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f\": not found" containerID="27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f" Oct 8 20:05:21.750624 kubelet[2716]: I1008 20:05:21.750621 2716 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f"} err="failed to get container status \"27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f\": rpc error: code = NotFound desc = an error occurred when try to find container \"27467c904b948011e835e503b8eae2b69892d3a7ee092df134181d604754a72f\": not found" Oct 8 20:05:21.750698 kubelet[2716]: I1008 20:05:21.750633 2716 scope.go:117] "RemoveContainer" containerID="ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45" Oct 8 20:05:21.750860 containerd[1559]: 
time="2024-10-08T20:05:21.750769442Z" level=error msg="ContainerStatus for \"ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45\": not found" Oct 8 20:05:21.750898 kubelet[2716]: E1008 20:05:21.750858 2716 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45\": not found" containerID="ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45" Oct 8 20:05:21.750898 kubelet[2716]: I1008 20:05:21.750877 2716 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45"} err="failed to get container status \"ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45\": rpc error: code = NotFound desc = an error occurred when try to find container \"ffaf95645943f231f2c137eff44d10c3020acb8af318679d0248629e10045f45\": not found" Oct 8 20:05:21.750898 kubelet[2716]: I1008 20:05:21.750885 2716 scope.go:117] "RemoveContainer" containerID="430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72" Oct 8 20:05:21.751455 containerd[1559]: time="2024-10-08T20:05:21.751376720Z" level=error msg="ContainerStatus for \"430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72\": not found" Oct 8 20:05:21.751969 kubelet[2716]: E1008 20:05:21.751696 2716 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72\": not found" containerID="430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72" Oct 8 20:05:21.751969 kubelet[2716]: I1008 20:05:21.751727 2716 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72"} err="failed to get container status \"430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72\": rpc error: code = NotFound desc = an error occurred when try to find container \"430ca0ff4c1c4d9395a2498df1cd8701b7d5391e66753c3fc4d3416c3928bc72\": not found" Oct 8 20:05:21.751969 kubelet[2716]: I1008 20:05:21.751737 2716 scope.go:117] "RemoveContainer" containerID="6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28" Oct 8 20:05:21.752082 containerd[1559]: time="2024-10-08T20:05:21.751911919Z" level=error msg="ContainerStatus for \"6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28\": not found" Oct 8 20:05:21.752320 kubelet[2716]: E1008 20:05:21.752230 2716 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28\": not found" containerID="6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28" Oct 8 20:05:21.752320 kubelet[2716]: I1008 20:05:21.752285 2716 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28"} err="failed to get container status \"6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"6e03819df22cf55cd9c6cb061be94d781dc70ff9f2438c2f07e1b35b60286a28\": not found" Oct 8 20:05:21.777508 kubelet[2716]: I1008 20:05:21.777475 2716 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d706d7f-20c9-4e49-a33f-b1bc90644f25-cilium-config-path\") pod \"4d706d7f-20c9-4e49-a33f-b1bc90644f25\" (UID: \"4d706d7f-20c9-4e49-a33f-b1bc90644f25\") " Oct 8 20:05:21.777798 kubelet[2716]: I1008 20:05:21.777668 2716 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-etc-cni-netd\") pod \"d783fb45-9f0c-4534-801b-f71cbf6beb35\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " Oct 8 20:05:21.777798 kubelet[2716]: I1008 20:05:21.777696 2716 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-xtables-lock\") pod \"d783fb45-9f0c-4534-801b-f71cbf6beb35\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " Oct 8 20:05:21.777798 kubelet[2716]: I1008 20:05:21.777748 2716 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d783fb45-9f0c-4534-801b-f71cbf6beb35-clustermesh-secrets\") pod \"d783fb45-9f0c-4534-801b-f71cbf6beb35\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " Oct 8 20:05:21.777798 kubelet[2716]: I1008 20:05:21.777771 2716 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-bpf-maps\") pod \"d783fb45-9f0c-4534-801b-f71cbf6beb35\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " Oct 8 20:05:21.777798 kubelet[2716]: I1008 20:05:21.777776 2716 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d783fb45-9f0c-4534-801b-f71cbf6beb35" (UID: "d783fb45-9f0c-4534-801b-f71cbf6beb35"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:05:21.777931 kubelet[2716]: I1008 20:05:21.777814 2716 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d783fb45-9f0c-4534-801b-f71cbf6beb35" (UID: "d783fb45-9f0c-4534-801b-f71cbf6beb35"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:05:21.778538 kubelet[2716]: I1008 20:05:21.777972 2716 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lnjf\" (UniqueName: \"kubernetes.io/projected/d783fb45-9f0c-4534-801b-f71cbf6beb35-kube-api-access-8lnjf\") pod \"d783fb45-9f0c-4534-801b-f71cbf6beb35\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " Oct 8 20:05:21.778538 kubelet[2716]: I1008 20:05:21.778000 2716 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-cni-path\") pod \"d783fb45-9f0c-4534-801b-f71cbf6beb35\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " Oct 8 20:05:21.778538 kubelet[2716]: I1008 20:05:21.778022 2716 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkk94\" (UniqueName: \"kubernetes.io/projected/4d706d7f-20c9-4e49-a33f-b1bc90644f25-kube-api-access-qkk94\") pod \"4d706d7f-20c9-4e49-a33f-b1bc90644f25\" (UID: \"4d706d7f-20c9-4e49-a33f-b1bc90644f25\") " Oct 8 20:05:21.778538 kubelet[2716]: I1008 20:05:21.778077 2716 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-host-proc-sys-net\") pod \"d783fb45-9f0c-4534-801b-f71cbf6beb35\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " Oct 8 20:05:21.778538 kubelet[2716]: I1008 20:05:21.778108 2716 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d783fb45-9f0c-4534-801b-f71cbf6beb35-cilium-config-path\") pod \"d783fb45-9f0c-4534-801b-f71cbf6beb35\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " Oct 8 20:05:21.778538 kubelet[2716]: I1008 20:05:21.778143 2716 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-lib-modules\") pod \"d783fb45-9f0c-4534-801b-f71cbf6beb35\" (UID: \"d783fb45-9f0c-4534-801b-f71cbf6beb35\") " Oct 8 20:05:21.778735 kubelet[2716]: I1008 20:05:21.778180 2716 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 8 20:05:21.778735 kubelet[2716]: I1008 20:05:21.778192 2716 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 8 20:05:21.778735 kubelet[2716]: I1008 20:05:21.778202 2716 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 8 20:05:21.778735 kubelet[2716]: I1008 20:05:21.778211 2716 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 8 20:05:21.778735 kubelet[2716]: I1008 
20:05:21.778221 2716 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 8 20:05:21.778735 kubelet[2716]: I1008 20:05:21.778232 2716 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 8 20:05:21.778735 kubelet[2716]: I1008 20:05:21.778241 2716 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d783fb45-9f0c-4534-801b-f71cbf6beb35-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 8 20:05:21.778735 kubelet[2716]: I1008 20:05:21.778268 2716 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d783fb45-9f0c-4534-801b-f71cbf6beb35" (UID: "d783fb45-9f0c-4534-801b-f71cbf6beb35"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:05:21.778901 kubelet[2716]: I1008 20:05:21.778291 2716 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d783fb45-9f0c-4534-801b-f71cbf6beb35" (UID: "d783fb45-9f0c-4534-801b-f71cbf6beb35"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:05:21.779179 kubelet[2716]: I1008 20:05:21.779108 2716 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d783fb45-9f0c-4534-801b-f71cbf6beb35" (UID: "d783fb45-9f0c-4534-801b-f71cbf6beb35"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:05:21.779902 kubelet[2716]: I1008 20:05:21.779863 2716 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d706d7f-20c9-4e49-a33f-b1bc90644f25-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4d706d7f-20c9-4e49-a33f-b1bc90644f25" (UID: "4d706d7f-20c9-4e49-a33f-b1bc90644f25"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 20:05:21.779964 kubelet[2716]: I1008 20:05:21.779921 2716 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-cni-path" (OuterVolumeSpecName: "cni-path") pod "d783fb45-9f0c-4534-801b-f71cbf6beb35" (UID: "d783fb45-9f0c-4534-801b-f71cbf6beb35"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 20:05:21.780618 kubelet[2716]: I1008 20:05:21.780582 2716 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d783fb45-9f0c-4534-801b-f71cbf6beb35-kube-api-access-8lnjf" (OuterVolumeSpecName: "kube-api-access-8lnjf") pod "d783fb45-9f0c-4534-801b-f71cbf6beb35" (UID: "d783fb45-9f0c-4534-801b-f71cbf6beb35"). InnerVolumeSpecName "kube-api-access-8lnjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 20:05:21.781024 kubelet[2716]: I1008 20:05:21.780999 2716 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d783fb45-9f0c-4534-801b-f71cbf6beb35-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d783fb45-9f0c-4534-801b-f71cbf6beb35" (UID: "d783fb45-9f0c-4534-801b-f71cbf6beb35"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 8 20:05:21.781270 kubelet[2716]: I1008 20:05:21.781237 2716 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d783fb45-9f0c-4534-801b-f71cbf6beb35-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d783fb45-9f0c-4534-801b-f71cbf6beb35" (UID: "d783fb45-9f0c-4534-801b-f71cbf6beb35"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 20:05:21.781561 kubelet[2716]: I1008 20:05:21.781526 2716 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d706d7f-20c9-4e49-a33f-b1bc90644f25-kube-api-access-qkk94" (OuterVolumeSpecName: "kube-api-access-qkk94") pod "4d706d7f-20c9-4e49-a33f-b1bc90644f25" (UID: "4d706d7f-20c9-4e49-a33f-b1bc90644f25"). InnerVolumeSpecName "kube-api-access-qkk94". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 20:05:21.879205 kubelet[2716]: I1008 20:05:21.879142 2716 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 8 20:05:21.879205 kubelet[2716]: I1008 20:05:21.879176 2716 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d706d7f-20c9-4e49-a33f-b1bc90644f25-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 8 20:05:21.879205 kubelet[2716]: I1008 20:05:21.879188 2716 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d783fb45-9f0c-4534-801b-f71cbf6beb35-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 8 20:05:21.879205 kubelet[2716]: I1008 20:05:21.879198 2716 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-cni-path\") 
on node \"localhost\" DevicePath \"\"" Oct 8 20:05:21.879205 kubelet[2716]: I1008 20:05:21.879209 2716 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qkk94\" (UniqueName: \"kubernetes.io/projected/4d706d7f-20c9-4e49-a33f-b1bc90644f25-kube-api-access-qkk94\") on node \"localhost\" DevicePath \"\"" Oct 8 20:05:21.879205 kubelet[2716]: I1008 20:05:21.879218 2716 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 8 20:05:21.879205 kubelet[2716]: I1008 20:05:21.879228 2716 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8lnjf\" (UniqueName: \"kubernetes.io/projected/d783fb45-9f0c-4534-801b-f71cbf6beb35-kube-api-access-8lnjf\") on node \"localhost\" DevicePath \"\"" Oct 8 20:05:21.879543 kubelet[2716]: I1008 20:05:21.879239 2716 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d783fb45-9f0c-4534-801b-f71cbf6beb35-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 8 20:05:21.879543 kubelet[2716]: I1008 20:05:21.879248 2716 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d783fb45-9f0c-4534-801b-f71cbf6beb35-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 8 20:05:22.440621 kubelet[2716]: E1008 20:05:22.440551 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:05:22.441759 kubelet[2716]: I1008 20:05:22.441712 2716 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4d706d7f-20c9-4e49-a33f-b1bc90644f25" path="/var/lib/kubelet/pods/4d706d7f-20c9-4e49-a33f-b1bc90644f25/volumes" Oct 8 20:05:22.442157 kubelet[2716]: I1008 20:05:22.442100 2716 kubelet_volumes.go:161] "Cleaned 
up orphaned pod volumes dir" podUID="d783fb45-9f0c-4534-801b-f71cbf6beb35" path="/var/lib/kubelet/pods/d783fb45-9f0c-4534-801b-f71cbf6beb35/volumes" Oct 8 20:05:22.482636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5474aec4d189d5be1043604334c718a169fa538b8b9514d3059e2a2859e16877-rootfs.mount: Deactivated successfully. Oct 8 20:05:22.482788 systemd[1]: var-lib-kubelet-pods-4d706d7f\x2d20c9\x2d4e49\x2da33f\x2db1bc90644f25-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqkk94.mount: Deactivated successfully. Oct 8 20:05:22.482874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5ef9d116c72e496e5ac5c93b23241ef6dbe9dc465cc9990c6ccf821369c64f0-rootfs.mount: Deactivated successfully. Oct 8 20:05:22.482956 systemd[1]: var-lib-kubelet-pods-d783fb45\x2d9f0c\x2d4534\x2d801b\x2df71cbf6beb35-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8lnjf.mount: Deactivated successfully. Oct 8 20:05:22.483037 systemd[1]: var-lib-kubelet-pods-d783fb45\x2d9f0c\x2d4534\x2d801b\x2df71cbf6beb35-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 8 20:05:22.483134 systemd[1]: var-lib-kubelet-pods-d783fb45\x2d9f0c\x2d4534\x2d801b\x2df71cbf6beb35-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 8 20:05:23.417600 sshd[4357]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:23.432343 systemd[1]: Started sshd@23-10.0.0.154:22-10.0.0.1:55622.service - OpenSSH per-connection server daemon (10.0.0.1:55622). Oct 8 20:05:23.432758 systemd[1]: sshd@22-10.0.0.154:22-10.0.0.1:40956.service: Deactivated successfully. Oct 8 20:05:23.434578 systemd[1]: session-23.scope: Deactivated successfully. Oct 8 20:05:23.435220 systemd-logind[1544]: Session 23 logged out. Waiting for processes to exit. Oct 8 20:05:23.436439 systemd-logind[1544]: Removed session 23. 
Oct 8 20:05:23.439816 kubelet[2716]: E1008 20:05:23.439789 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:05:23.467937 sshd[4525]: Accepted publickey for core from 10.0.0.1 port 55622 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:05:23.469108 sshd[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:23.473092 systemd-logind[1544]: New session 24 of user core. Oct 8 20:05:23.481403 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 8 20:05:24.238240 sshd[4525]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:24.245996 systemd[1]: Started sshd@24-10.0.0.154:22-10.0.0.1:55626.service - OpenSSH per-connection server daemon (10.0.0.1:55626). Oct 8 20:05:24.250723 systemd[1]: sshd@23-10.0.0.154:22-10.0.0.1:55622.service: Deactivated successfully. Oct 8 20:05:24.253363 systemd[1]: session-24.scope: Deactivated successfully. Oct 8 20:05:24.256031 systemd-logind[1544]: Session 24 logged out. Waiting for processes to exit. Oct 8 20:05:24.261379 systemd-logind[1544]: Removed session 24. 
Oct 8 20:05:24.273734 kubelet[2716]: I1008 20:05:24.273695 2716 topology_manager.go:215] "Topology Admit Handler" podUID="91f14932-3a91-440a-8caa-578c02f916b4" podNamespace="kube-system" podName="cilium-9nwzn" Oct 8 20:05:24.273842 kubelet[2716]: E1008 20:05:24.273752 2716 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d783fb45-9f0c-4534-801b-f71cbf6beb35" containerName="mount-cgroup" Oct 8 20:05:24.273842 kubelet[2716]: E1008 20:05:24.273762 2716 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d783fb45-9f0c-4534-801b-f71cbf6beb35" containerName="mount-bpf-fs" Oct 8 20:05:24.273842 kubelet[2716]: E1008 20:05:24.273770 2716 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d783fb45-9f0c-4534-801b-f71cbf6beb35" containerName="cilium-agent" Oct 8 20:05:24.273842 kubelet[2716]: E1008 20:05:24.273776 2716 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4d706d7f-20c9-4e49-a33f-b1bc90644f25" containerName="cilium-operator" Oct 8 20:05:24.273842 kubelet[2716]: E1008 20:05:24.273783 2716 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d783fb45-9f0c-4534-801b-f71cbf6beb35" containerName="clean-cilium-state" Oct 8 20:05:24.273842 kubelet[2716]: E1008 20:05:24.273791 2716 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d783fb45-9f0c-4534-801b-f71cbf6beb35" containerName="apply-sysctl-overwrites" Oct 8 20:05:24.273842 kubelet[2716]: I1008 20:05:24.273814 2716 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d706d7f-20c9-4e49-a33f-b1bc90644f25" containerName="cilium-operator" Oct 8 20:05:24.273842 kubelet[2716]: I1008 20:05:24.273827 2716 memory_manager.go:354] "RemoveStaleState removing state" podUID="d783fb45-9f0c-4534-801b-f71cbf6beb35" containerName="cilium-agent" Oct 8 20:05:24.291102 kubelet[2716]: I1008 20:05:24.291003 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/91f14932-3a91-440a-8caa-578c02f916b4-bpf-maps\") pod \"cilium-9nwzn\" (UID: \"91f14932-3a91-440a-8caa-578c02f916b4\") " pod="kube-system/cilium-9nwzn" Oct 8 20:05:24.291102 kubelet[2716]: I1008 20:05:24.291067 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/91f14932-3a91-440a-8caa-578c02f916b4-cni-path\") pod \"cilium-9nwzn\" (UID: \"91f14932-3a91-440a-8caa-578c02f916b4\") " pod="kube-system/cilium-9nwzn" Oct 8 20:05:24.291102 kubelet[2716]: I1008 20:05:24.291096 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/91f14932-3a91-440a-8caa-578c02f916b4-cilium-run\") pod \"cilium-9nwzn\" (UID: \"91f14932-3a91-440a-8caa-578c02f916b4\") " pod="kube-system/cilium-9nwzn" Oct 8 20:05:24.291102 kubelet[2716]: I1008 20:05:24.291146 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91f14932-3a91-440a-8caa-578c02f916b4-hostproc\") pod \"cilium-9nwzn\" (UID: \"91f14932-3a91-440a-8caa-578c02f916b4\") " pod="kube-system/cilium-9nwzn" Oct 8 20:05:24.292221 kubelet[2716]: I1008 20:05:24.291492 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91f14932-3a91-440a-8caa-578c02f916b4-cilium-config-path\") pod \"cilium-9nwzn\" (UID: \"91f14932-3a91-440a-8caa-578c02f916b4\") " pod="kube-system/cilium-9nwzn" Oct 8 20:05:24.292221 kubelet[2716]: I1008 20:05:24.292144 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91f14932-3a91-440a-8caa-578c02f916b4-host-proc-sys-kernel\") pod \"cilium-9nwzn\" (UID: 
\"91f14932-3a91-440a-8caa-578c02f916b4\") " pod="kube-system/cilium-9nwzn" Oct 8 20:05:24.292221 kubelet[2716]: I1008 20:05:24.292176 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/91f14932-3a91-440a-8caa-578c02f916b4-clustermesh-secrets\") pod \"cilium-9nwzn\" (UID: \"91f14932-3a91-440a-8caa-578c02f916b4\") " pod="kube-system/cilium-9nwzn" Oct 8 20:05:24.292221 kubelet[2716]: I1008 20:05:24.292196 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/91f14932-3a91-440a-8caa-578c02f916b4-cilium-ipsec-secrets\") pod \"cilium-9nwzn\" (UID: \"91f14932-3a91-440a-8caa-578c02f916b4\") " pod="kube-system/cilium-9nwzn" Oct 8 20:05:24.292221 kubelet[2716]: I1008 20:05:24.292224 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7rdr\" (UniqueName: \"kubernetes.io/projected/91f14932-3a91-440a-8caa-578c02f916b4-kube-api-access-f7rdr\") pod \"cilium-9nwzn\" (UID: \"91f14932-3a91-440a-8caa-578c02f916b4\") " pod="kube-system/cilium-9nwzn" Oct 8 20:05:24.293139 kubelet[2716]: I1008 20:05:24.293009 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/91f14932-3a91-440a-8caa-578c02f916b4-host-proc-sys-net\") pod \"cilium-9nwzn\" (UID: \"91f14932-3a91-440a-8caa-578c02f916b4\") " pod="kube-system/cilium-9nwzn" Oct 8 20:05:24.293139 kubelet[2716]: I1008 20:05:24.293054 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/91f14932-3a91-440a-8caa-578c02f916b4-cilium-cgroup\") pod \"cilium-9nwzn\" (UID: \"91f14932-3a91-440a-8caa-578c02f916b4\") " pod="kube-system/cilium-9nwzn" Oct 8 
20:05:24.293139 kubelet[2716]: I1008 20:05:24.293077 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91f14932-3a91-440a-8caa-578c02f916b4-xtables-lock\") pod \"cilium-9nwzn\" (UID: \"91f14932-3a91-440a-8caa-578c02f916b4\") " pod="kube-system/cilium-9nwzn" Oct 8 20:05:24.293249 kubelet[2716]: I1008 20:05:24.293236 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91f14932-3a91-440a-8caa-578c02f916b4-etc-cni-netd\") pod \"cilium-9nwzn\" (UID: \"91f14932-3a91-440a-8caa-578c02f916b4\") " pod="kube-system/cilium-9nwzn" Oct 8 20:05:24.293814 kubelet[2716]: I1008 20:05:24.293728 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91f14932-3a91-440a-8caa-578c02f916b4-lib-modules\") pod \"cilium-9nwzn\" (UID: \"91f14932-3a91-440a-8caa-578c02f916b4\") " pod="kube-system/cilium-9nwzn" Oct 8 20:05:24.293814 kubelet[2716]: I1008 20:05:24.293758 2716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91f14932-3a91-440a-8caa-578c02f916b4-hubble-tls\") pod \"cilium-9nwzn\" (UID: \"91f14932-3a91-440a-8caa-578c02f916b4\") " pod="kube-system/cilium-9nwzn" Oct 8 20:05:24.310265 sshd[4538]: Accepted publickey for core from 10.0.0.1 port 55626 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:05:24.311547 sshd[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:24.315244 systemd-logind[1544]: New session 25 of user core. Oct 8 20:05:24.328405 systemd[1]: Started session-25.scope - Session 25 of User core. 
Oct 8 20:05:24.377380 sshd[4538]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:24.386392 systemd[1]: Started sshd@25-10.0.0.154:22-10.0.0.1:55628.service - OpenSSH per-connection server daemon (10.0.0.1:55628). Oct 8 20:05:24.386764 systemd[1]: sshd@24-10.0.0.154:22-10.0.0.1:55626.service: Deactivated successfully. Oct 8 20:05:24.389240 systemd[1]: session-25.scope: Deactivated successfully. Oct 8 20:05:24.390480 systemd-logind[1544]: Session 25 logged out. Waiting for processes to exit. Oct 8 20:05:24.391500 systemd-logind[1544]: Removed session 25. Oct 8 20:05:24.432169 sshd[4547]: Accepted publickey for core from 10.0.0.1 port 55628 ssh2: RSA SHA256:PeFR0GwG3Km7u6+IJymPx7tkM/vpusnYsvzmiMSzq3A Oct 8 20:05:24.432705 sshd[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:24.436686 systemd-logind[1544]: New session 26 of user core. Oct 8 20:05:24.451352 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 8 20:05:24.500121 kubelet[2716]: E1008 20:05:24.500016 2716 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 8 20:05:24.580595 kubelet[2716]: E1008 20:05:24.578902 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:05:24.580808 containerd[1559]: time="2024-10-08T20:05:24.580759842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9nwzn,Uid:91f14932-3a91-440a-8caa-578c02f916b4,Namespace:kube-system,Attempt:0,}" Oct 8 20:05:24.596998 containerd[1559]: time="2024-10-08T20:05:24.596913556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:05:24.596998 containerd[1559]: time="2024-10-08T20:05:24.596967636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:05:24.596998 containerd[1559]: time="2024-10-08T20:05:24.596983716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:05:24.597197 containerd[1559]: time="2024-10-08T20:05:24.597061395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:05:24.629069 containerd[1559]: time="2024-10-08T20:05:24.629031705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9nwzn,Uid:91f14932-3a91-440a-8caa-578c02f916b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"28331020afbd7be63a52356de1e671c7f60b7b203f3e4f0f9da77ee258bc2b36\"" Oct 8 20:05:24.630358 kubelet[2716]: E1008 20:05:24.630103 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:05:24.632558 containerd[1559]: time="2024-10-08T20:05:24.632436375Z" level=info msg="CreateContainer within sandbox \"28331020afbd7be63a52356de1e671c7f60b7b203f3e4f0f9da77ee258bc2b36\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 20:05:24.648675 containerd[1559]: time="2024-10-08T20:05:24.648616769Z" level=info msg="CreateContainer within sandbox \"28331020afbd7be63a52356de1e671c7f60b7b203f3e4f0f9da77ee258bc2b36\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1f927d48b7a9bd92233f3e907f06bafc2fc5124a681cf68f1c29a9113ba4ca2f\"" Oct 8 20:05:24.649181 containerd[1559]: time="2024-10-08T20:05:24.649147008Z" level=info msg="StartContainer for 
\"1f927d48b7a9bd92233f3e907f06bafc2fc5124a681cf68f1c29a9113ba4ca2f\"" Oct 8 20:05:24.701219 containerd[1559]: time="2024-10-08T20:05:24.701130660Z" level=info msg="StartContainer for \"1f927d48b7a9bd92233f3e907f06bafc2fc5124a681cf68f1c29a9113ba4ca2f\" returns successfully" Oct 8 20:05:24.723137 kubelet[2716]: E1008 20:05:24.722638 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:05:24.743951 containerd[1559]: time="2024-10-08T20:05:24.743890499Z" level=info msg="shim disconnected" id=1f927d48b7a9bd92233f3e907f06bafc2fc5124a681cf68f1c29a9113ba4ca2f namespace=k8s.io Oct 8 20:05:24.743951 containerd[1559]: time="2024-10-08T20:05:24.743942179Z" level=warning msg="cleaning up after shim disconnected" id=1f927d48b7a9bd92233f3e907f06bafc2fc5124a681cf68f1c29a9113ba4ca2f namespace=k8s.io Oct 8 20:05:24.743951 containerd[1559]: time="2024-10-08T20:05:24.743952659Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:05:25.702201 kubelet[2716]: I1008 20:05:25.702159 2716 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-08T20:05:25Z","lastTransitionTime":"2024-10-08T20:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 8 20:05:25.726030 kubelet[2716]: E1008 20:05:25.725983 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 20:05:25.730738 containerd[1559]: time="2024-10-08T20:05:25.730677075Z" level=info msg="CreateContainer within sandbox \"28331020afbd7be63a52356de1e671c7f60b7b203f3e4f0f9da77ee258bc2b36\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Oct 8 20:05:25.743386 containerd[1559]: time="2024-10-08T20:05:25.743076481Z" level=info msg="CreateContainer within sandbox \"28331020afbd7be63a52356de1e671c7f60b7b203f3e4f0f9da77ee258bc2b36\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6360b3c0dcf0b5842aa54943383e16ae921410912d5038b81496024fec09defd\""
Oct 8 20:05:25.744920 containerd[1559]: time="2024-10-08T20:05:25.744772996Z" level=info msg="StartContainer for \"6360b3c0dcf0b5842aa54943383e16ae921410912d5038b81496024fec09defd\""
Oct 8 20:05:25.794432 containerd[1559]: time="2024-10-08T20:05:25.794371659Z" level=info msg="StartContainer for \"6360b3c0dcf0b5842aa54943383e16ae921410912d5038b81496024fec09defd\" returns successfully"
Oct 8 20:05:25.814508 containerd[1559]: time="2024-10-08T20:05:25.814440044Z" level=info msg="shim disconnected" id=6360b3c0dcf0b5842aa54943383e16ae921410912d5038b81496024fec09defd namespace=k8s.io
Oct 8 20:05:25.814508 containerd[1559]: time="2024-10-08T20:05:25.814493764Z" level=warning msg="cleaning up after shim disconnected" id=6360b3c0dcf0b5842aa54943383e16ae921410912d5038b81496024fec09defd namespace=k8s.io
Oct 8 20:05:25.814508 containerd[1559]: time="2024-10-08T20:05:25.814501844Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 20:05:26.398749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6360b3c0dcf0b5842aa54943383e16ae921410912d5038b81496024fec09defd-rootfs.mount: Deactivated successfully.
Oct 8 20:05:26.729436 kubelet[2716]: E1008 20:05:26.729363 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:05:26.732452 containerd[1559]: time="2024-10-08T20:05:26.732393962Z" level=info msg="CreateContainer within sandbox \"28331020afbd7be63a52356de1e671c7f60b7b203f3e4f0f9da77ee258bc2b36\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Oct 8 20:05:26.746679 containerd[1559]: time="2024-10-08T20:05:26.746631524Z" level=info msg="CreateContainer within sandbox \"28331020afbd7be63a52356de1e671c7f60b7b203f3e4f0f9da77ee258bc2b36\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2525e28f3b5bc5f6d6d4d0ce7c85171d10df7c09161ce97b4cc72f7686967412\""
Oct 8 20:05:26.748518 containerd[1559]: time="2024-10-08T20:05:26.747177483Z" level=info msg="StartContainer for \"2525e28f3b5bc5f6d6d4d0ce7c85171d10df7c09161ce97b4cc72f7686967412\""
Oct 8 20:05:26.814825 containerd[1559]: time="2024-10-08T20:05:26.814786341Z" level=info msg="StartContainer for \"2525e28f3b5bc5f6d6d4d0ce7c85171d10df7c09161ce97b4cc72f7686967412\" returns successfully"
Oct 8 20:05:26.834637 containerd[1559]: time="2024-10-08T20:05:26.834428848Z" level=info msg="shim disconnected" id=2525e28f3b5bc5f6d6d4d0ce7c85171d10df7c09161ce97b4cc72f7686967412 namespace=k8s.io
Oct 8 20:05:26.834637 containerd[1559]: time="2024-10-08T20:05:26.834481808Z" level=warning msg="cleaning up after shim disconnected" id=2525e28f3b5bc5f6d6d4d0ce7c85171d10df7c09161ce97b4cc72f7686967412 namespace=k8s.io
Oct 8 20:05:26.834637 containerd[1559]: time="2024-10-08T20:05:26.834498808Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 20:05:27.398813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2525e28f3b5bc5f6d6d4d0ce7c85171d10df7c09161ce97b4cc72f7686967412-rootfs.mount: Deactivated successfully.
Oct 8 20:05:27.735492 kubelet[2716]: E1008 20:05:27.735447 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:05:27.738458 containerd[1559]: time="2024-10-08T20:05:27.738410629Z" level=info msg="CreateContainer within sandbox \"28331020afbd7be63a52356de1e671c7f60b7b203f3e4f0f9da77ee258bc2b36\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 8 20:05:27.755073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2080365758.mount: Deactivated successfully.
Oct 8 20:05:27.760434 containerd[1559]: time="2024-10-08T20:05:27.760359132Z" level=info msg="CreateContainer within sandbox \"28331020afbd7be63a52356de1e671c7f60b7b203f3e4f0f9da77ee258bc2b36\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ca0a9bbd0ae8aa584c129609b5a5b7e734a1f2e23331cb95fd47930f418e70b9\""
Oct 8 20:05:27.762297 containerd[1559]: time="2024-10-08T20:05:27.761508489Z" level=info msg="StartContainer for \"ca0a9bbd0ae8aa584c129609b5a5b7e734a1f2e23331cb95fd47930f418e70b9\""
Oct 8 20:05:27.803465 containerd[1559]: time="2024-10-08T20:05:27.803421579Z" level=info msg="StartContainer for \"ca0a9bbd0ae8aa584c129609b5a5b7e734a1f2e23331cb95fd47930f418e70b9\" returns successfully"
Oct 8 20:05:27.821965 containerd[1559]: time="2024-10-08T20:05:27.821910931Z" level=info msg="shim disconnected" id=ca0a9bbd0ae8aa584c129609b5a5b7e734a1f2e23331cb95fd47930f418e70b9 namespace=k8s.io
Oct 8 20:05:27.822173 containerd[1559]: time="2024-10-08T20:05:27.822154330Z" level=warning msg="cleaning up after shim disconnected" id=ca0a9bbd0ae8aa584c129609b5a5b7e734a1f2e23331cb95fd47930f418e70b9 namespace=k8s.io
Oct 8 20:05:27.822231 containerd[1559]: time="2024-10-08T20:05:27.822219130Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 20:05:28.398929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca0a9bbd0ae8aa584c129609b5a5b7e734a1f2e23331cb95fd47930f418e70b9-rootfs.mount: Deactivated successfully.
Oct 8 20:05:28.739255 kubelet[2716]: E1008 20:05:28.739213 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:05:28.742690 containerd[1559]: time="2024-10-08T20:05:28.742557171Z" level=info msg="CreateContainer within sandbox \"28331020afbd7be63a52356de1e671c7f60b7b203f3e4f0f9da77ee258bc2b36\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 8 20:05:28.751649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2232565144.mount: Deactivated successfully.
Oct 8 20:05:28.752682 containerd[1559]: time="2024-10-08T20:05:28.752625305Z" level=info msg="CreateContainer within sandbox \"28331020afbd7be63a52356de1e671c7f60b7b203f3e4f0f9da77ee258bc2b36\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b6a0c04b79cc56643616496bfd54e268950fc090083467bb8128a27a7b468b7b\""
Oct 8 20:05:28.754328 containerd[1559]: time="2024-10-08T20:05:28.754285461Z" level=info msg="StartContainer for \"b6a0c04b79cc56643616496bfd54e268950fc090083467bb8128a27a7b468b7b\""
Oct 8 20:05:28.798229 containerd[1559]: time="2024-10-08T20:05:28.798179949Z" level=info msg="StartContainer for \"b6a0c04b79cc56643616496bfd54e268950fc090083467bb8128a27a7b468b7b\" returns successfully"
Oct 8 20:05:29.064138 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Oct 8 20:05:29.744079 kubelet[2716]: E1008 20:05:29.744047 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:05:29.761773 kubelet[2716]: I1008 20:05:29.761726 2716 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9nwzn" podStartSLOduration=5.761688862 podStartE2EDuration="5.761688862s" podCreationTimestamp="2024-10-08 20:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:05:29.761463383 +0000 UTC m=+85.411305968" watchObservedRunningTime="2024-10-08 20:05:29.761688862 +0000 UTC m=+85.411531407"
Oct 8 20:05:30.746656 kubelet[2716]: E1008 20:05:30.746280 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:05:31.874287 systemd-networkd[1237]: lxc_health: Link UP
Oct 8 20:05:31.884847 systemd-networkd[1237]: lxc_health: Gained carrier
Oct 8 20:05:32.597911 kubelet[2716]: E1008 20:05:32.597864 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:05:32.752002 kubelet[2716]: E1008 20:05:32.751960 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:05:33.589223 systemd-networkd[1237]: lxc_health: Gained IPv6LL
Oct 8 20:05:33.754415 kubelet[2716]: E1008 20:05:33.754388 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 8 20:05:35.009020 systemd[1]: run-containerd-runc-k8s.io-b6a0c04b79cc56643616496bfd54e268950fc090083467bb8128a27a7b468b7b-runc.2qsHxE.mount: Deactivated successfully.
Oct 8 20:05:37.182041 sshd[4547]: pam_unix(sshd:session): session closed for user core
Oct 8 20:05:37.185555 systemd[1]: sshd@25-10.0.0.154:22-10.0.0.1:55628.service: Deactivated successfully.
Oct 8 20:05:37.188312 systemd[1]: session-26.scope: Deactivated successfully.
Oct 8 20:05:37.189531 systemd-logind[1544]: Session 26 logged out. Waiting for processes to exit.
Oct 8 20:05:37.190595 systemd-logind[1544]: Removed session 26.
Oct 8 20:05:37.440818 kubelet[2716]: E1008 20:05:37.440655 2716 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"