Oct 9 00:52:59.887421 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 9 00:52:59.887441 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Oct 8 23:34:40 -00 2024 Oct 9 00:52:59.887451 kernel: KASLR enabled Oct 9 00:52:59.887456 kernel: efi: EFI v2.7 by EDK II Oct 9 00:52:59.887462 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Oct 9 00:52:59.887467 kernel: random: crng init done Oct 9 00:52:59.887474 kernel: secureboot: Secure boot disabled Oct 9 00:52:59.887480 kernel: ACPI: Early table checksum verification disabled Oct 9 00:52:59.887486 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Oct 9 00:52:59.887493 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 9 00:52:59.887499 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:52:59.887505 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:52:59.887511 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:52:59.887517 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:52:59.887524 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:52:59.887531 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:52:59.887538 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:52:59.887544 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:52:59.887550 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 00:52:59.887556 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 9 
00:52:59.887562 kernel: NUMA: Failed to initialise from firmware Oct 9 00:52:59.887568 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 9 00:52:59.887575 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Oct 9 00:52:59.887581 kernel: Zone ranges: Oct 9 00:52:59.887587 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 9 00:52:59.887594 kernel: DMA32 empty Oct 9 00:52:59.887600 kernel: Normal empty Oct 9 00:52:59.887606 kernel: Movable zone start for each node Oct 9 00:52:59.887613 kernel: Early memory node ranges Oct 9 00:52:59.887619 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Oct 9 00:52:59.887625 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Oct 9 00:52:59.887631 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Oct 9 00:52:59.887637 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Oct 9 00:52:59.887643 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Oct 9 00:52:59.887649 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Oct 9 00:52:59.887656 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Oct 9 00:52:59.887662 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 9 00:52:59.887669 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 9 00:52:59.887675 kernel: psci: probing for conduit method from ACPI. Oct 9 00:52:59.887681 kernel: psci: PSCIv1.1 detected in firmware. 
Oct 9 00:52:59.887690 kernel: psci: Using standard PSCI v0.2 function IDs Oct 9 00:52:59.887696 kernel: psci: Trusted OS migration not required Oct 9 00:52:59.887703 kernel: psci: SMC Calling Convention v1.1 Oct 9 00:52:59.887711 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 9 00:52:59.887717 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Oct 9 00:52:59.887724 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Oct 9 00:52:59.887740 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 9 00:52:59.887747 kernel: Detected PIPT I-cache on CPU0 Oct 9 00:52:59.887754 kernel: CPU features: detected: GIC system register CPU interface Oct 9 00:52:59.887761 kernel: CPU features: detected: Hardware dirty bit management Oct 9 00:52:59.887767 kernel: CPU features: detected: Spectre-v4 Oct 9 00:52:59.887774 kernel: CPU features: detected: Spectre-BHB Oct 9 00:52:59.887780 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 9 00:52:59.887789 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 9 00:52:59.887796 kernel: CPU features: detected: ARM erratum 1418040 Oct 9 00:52:59.887802 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 9 00:52:59.887809 kernel: alternatives: applying boot alternatives Oct 9 00:52:59.887816 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d2d67b5440410ae2d0aa86eba97891969be0a7a421fa55f13442706ef7ed2a5e Oct 9 00:52:59.887823 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Oct 9 00:52:59.887830 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 9 00:52:59.887836 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 9 00:52:59.887843 kernel: Fallback order for Node 0: 0 Oct 9 00:52:59.887849 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Oct 9 00:52:59.887856 kernel: Policy zone: DMA Oct 9 00:52:59.887863 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 9 00:52:59.887870 kernel: software IO TLB: area num 4. Oct 9 00:52:59.887876 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Oct 9 00:52:59.887883 kernel: Memory: 2386404K/2572288K available (10240K kernel code, 2184K rwdata, 8092K rodata, 39552K init, 897K bss, 185884K reserved, 0K cma-reserved) Oct 9 00:52:59.887890 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 9 00:52:59.887897 kernel: trace event string verifier disabled Oct 9 00:52:59.887903 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 9 00:52:59.887910 kernel: rcu: RCU event tracing is enabled. Oct 9 00:52:59.887917 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 9 00:52:59.887924 kernel: Trampoline variant of Tasks RCU enabled. Oct 9 00:52:59.887930 kernel: Tracing variant of Tasks RCU enabled. Oct 9 00:52:59.887937 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 9 00:52:59.887945 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 9 00:52:59.887952 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 9 00:52:59.887958 kernel: GICv3: 256 SPIs implemented Oct 9 00:52:59.887965 kernel: GICv3: 0 Extended SPIs implemented Oct 9 00:52:59.887971 kernel: Root IRQ handler: gic_handle_irq Oct 9 00:52:59.887978 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 9 00:52:59.887984 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 9 00:52:59.887991 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 9 00:52:59.887998 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) Oct 9 00:52:59.888005 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) Oct 9 00:52:59.888011 kernel: GICv3: using LPI property table @0x00000000400f0000 Oct 9 00:52:59.888019 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Oct 9 00:52:59.888025 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 9 00:52:59.888032 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 9 00:52:59.888039 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 9 00:52:59.888045 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 9 00:52:59.888052 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 9 00:52:59.888059 kernel: arm-pv: using stolen time PV Oct 9 00:52:59.888065 kernel: Console: colour dummy device 80x25 Oct 9 00:52:59.888072 kernel: ACPI: Core revision 20230628 Oct 9 00:52:59.888079 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Oct 9 00:52:59.888086 kernel: pid_max: default: 32768 minimum: 301 Oct 9 00:52:59.888094 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 9 00:52:59.888100 kernel: landlock: Up and running. Oct 9 00:52:59.888107 kernel: SELinux: Initializing. Oct 9 00:52:59.888114 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 9 00:52:59.888121 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 9 00:52:59.888127 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 00:52:59.888134 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 00:52:59.888141 kernel: rcu: Hierarchical SRCU implementation. Oct 9 00:52:59.888148 kernel: rcu: Max phase no-delay instances is 400. Oct 9 00:52:59.888155 kernel: Platform MSI: ITS@0x8080000 domain created Oct 9 00:52:59.888162 kernel: PCI/MSI: ITS@0x8080000 domain created Oct 9 00:52:59.888169 kernel: Remapping and enabling EFI services. Oct 9 00:52:59.888176 kernel: smp: Bringing up secondary CPUs ... 
Oct 9 00:52:59.888183 kernel: Detected PIPT I-cache on CPU1 Oct 9 00:52:59.888189 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 9 00:52:59.888197 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Oct 9 00:52:59.888203 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 9 00:52:59.888210 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 9 00:52:59.888218 kernel: Detected PIPT I-cache on CPU2 Oct 9 00:52:59.888225 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 9 00:52:59.888236 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Oct 9 00:52:59.888244 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 9 00:52:59.888251 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 9 00:52:59.888258 kernel: Detected PIPT I-cache on CPU3 Oct 9 00:52:59.888265 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 9 00:52:59.888272 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Oct 9 00:52:59.888280 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 9 00:52:59.888288 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 9 00:52:59.888295 kernel: smp: Brought up 1 node, 4 CPUs Oct 9 00:52:59.888302 kernel: SMP: Total of 4 processors activated. 
Oct 9 00:52:59.888328 kernel: CPU features: detected: 32-bit EL0 Support Oct 9 00:52:59.888339 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 9 00:52:59.888346 kernel: CPU features: detected: Common not Private translations Oct 9 00:52:59.888353 kernel: CPU features: detected: CRC32 instructions Oct 9 00:52:59.888360 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 9 00:52:59.888369 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 9 00:52:59.888376 kernel: CPU features: detected: LSE atomic instructions Oct 9 00:52:59.888383 kernel: CPU features: detected: Privileged Access Never Oct 9 00:52:59.888390 kernel: CPU features: detected: RAS Extension Support Oct 9 00:52:59.888397 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 9 00:52:59.888404 kernel: CPU: All CPU(s) started at EL1 Oct 9 00:52:59.888412 kernel: alternatives: applying system-wide alternatives Oct 9 00:52:59.888419 kernel: devtmpfs: initialized Oct 9 00:52:59.888426 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 9 00:52:59.888435 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 9 00:52:59.888442 kernel: pinctrl core: initialized pinctrl subsystem Oct 9 00:52:59.888449 kernel: SMBIOS 3.0.0 present. 
Oct 9 00:52:59.888456 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Oct 9 00:52:59.888464 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 9 00:52:59.888471 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 9 00:52:59.888478 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 9 00:52:59.888485 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 9 00:52:59.888492 kernel: audit: initializing netlink subsys (disabled) Oct 9 00:52:59.888501 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1 Oct 9 00:52:59.888508 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 9 00:52:59.888515 kernel: cpuidle: using governor menu Oct 9 00:52:59.888522 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 9 00:52:59.888529 kernel: ASID allocator initialised with 32768 entries Oct 9 00:52:59.888536 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 9 00:52:59.888543 kernel: Serial: AMBA PL011 UART driver Oct 9 00:52:59.888551 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 9 00:52:59.888558 kernel: Modules: 0 pages in range for non-PLT usage Oct 9 00:52:59.888566 kernel: Modules: 508992 pages in range for PLT usage Oct 9 00:52:59.888573 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 9 00:52:59.888580 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 9 00:52:59.888587 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 9 00:52:59.888594 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 9 00:52:59.888601 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 9 00:52:59.888608 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 9 00:52:59.888616 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 9 
00:52:59.888623 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 9 00:52:59.888630 kernel: ACPI: Added _OSI(Module Device) Oct 9 00:52:59.888638 kernel: ACPI: Added _OSI(Processor Device) Oct 9 00:52:59.888645 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 9 00:52:59.888652 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 9 00:52:59.888660 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 9 00:52:59.888667 kernel: ACPI: Interpreter enabled Oct 9 00:52:59.888674 kernel: ACPI: Using GIC for interrupt routing Oct 9 00:52:59.888681 kernel: ACPI: MCFG table detected, 1 entries Oct 9 00:52:59.888688 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 9 00:52:59.888695 kernel: printk: console [ttyAMA0] enabled Oct 9 00:52:59.888703 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 9 00:52:59.888833 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 9 00:52:59.888905 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 9 00:52:59.888971 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 9 00:52:59.889034 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 9 00:52:59.889096 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 9 00:52:59.889106 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 9 00:52:59.889115 kernel: PCI host bridge to bus 0000:00 Oct 9 00:52:59.889181 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 9 00:52:59.889239 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 9 00:52:59.889296 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 9 00:52:59.889385 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 9 00:52:59.889463 kernel: pci 0000:00:00.0: [1b36:0008] 
type 00 class 0x060000 Oct 9 00:52:59.889540 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Oct 9 00:52:59.889608 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Oct 9 00:52:59.889673 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Oct 9 00:52:59.889744 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Oct 9 00:52:59.889810 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Oct 9 00:52:59.889873 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Oct 9 00:52:59.889938 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Oct 9 00:52:59.889998 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 9 00:52:59.890053 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 9 00:52:59.890109 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 9 00:52:59.890118 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 9 00:52:59.890126 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 9 00:52:59.890133 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 9 00:52:59.890140 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 9 00:52:59.890147 kernel: iommu: Default domain type: Translated Oct 9 00:52:59.890156 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 9 00:52:59.890163 kernel: efivars: Registered efivars operations Oct 9 00:52:59.890170 kernel: vgaarb: loaded Oct 9 00:52:59.890177 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 9 00:52:59.890184 kernel: VFS: Disk quotas dquot_6.6.0 Oct 9 00:52:59.890192 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 9 00:52:59.890199 kernel: pnp: PnP ACPI init Oct 9 00:52:59.890267 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 9 00:52:59.890277 kernel: pnp: PnP ACPI: found 1 devices Oct 9 00:52:59.890286 
kernel: NET: Registered PF_INET protocol family Oct 9 00:52:59.890294 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 9 00:52:59.890301 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 9 00:52:59.890308 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 9 00:52:59.890328 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 9 00:52:59.890336 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 9 00:52:59.890343 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 9 00:52:59.890350 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 9 00:52:59.890359 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 9 00:52:59.890366 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 9 00:52:59.890374 kernel: PCI: CLS 0 bytes, default 64 Oct 9 00:52:59.890381 kernel: kvm [1]: HYP mode not available Oct 9 00:52:59.890388 kernel: Initialise system trusted keyrings Oct 9 00:52:59.890395 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 9 00:52:59.890402 kernel: Key type asymmetric registered Oct 9 00:52:59.890409 kernel: Asymmetric key parser 'x509' registered Oct 9 00:52:59.890416 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 9 00:52:59.890423 kernel: io scheduler mq-deadline registered Oct 9 00:52:59.890431 kernel: io scheduler kyber registered Oct 9 00:52:59.890438 kernel: io scheduler bfq registered Oct 9 00:52:59.890445 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 9 00:52:59.890452 kernel: ACPI: button: Power Button [PWRB] Oct 9 00:52:59.890460 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 9 00:52:59.890531 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 9 00:52:59.890541 kernel: Serial: 8250/16550 driver, 4 
ports, IRQ sharing enabled Oct 9 00:52:59.890548 kernel: thunder_xcv, ver 1.0 Oct 9 00:52:59.890555 kernel: thunder_bgx, ver 1.0 Oct 9 00:52:59.890564 kernel: nicpf, ver 1.0 Oct 9 00:52:59.890571 kernel: nicvf, ver 1.0 Oct 9 00:52:59.890642 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 9 00:52:59.890704 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-09T00:52:59 UTC (1728435179) Oct 9 00:52:59.890713 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 9 00:52:59.890721 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 9 00:52:59.890734 kernel: watchdog: Delayed init of the lockup detector failed: -19 Oct 9 00:52:59.890743 kernel: watchdog: Hard watchdog permanently disabled Oct 9 00:52:59.890753 kernel: NET: Registered PF_INET6 protocol family Oct 9 00:52:59.890760 kernel: Segment Routing with IPv6 Oct 9 00:52:59.890767 kernel: In-situ OAM (IOAM) with IPv6 Oct 9 00:52:59.890774 kernel: NET: Registered PF_PACKET protocol family Oct 9 00:52:59.890781 kernel: Key type dns_resolver registered Oct 9 00:52:59.890788 kernel: registered taskstats version 1 Oct 9 00:52:59.890795 kernel: Loading compiled-in X.509 certificates Oct 9 00:52:59.890802 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 80611b0a9480eaf6d787b908c6349fdb5d07fa81' Oct 9 00:52:59.890809 kernel: Key type .fscrypt registered Oct 9 00:52:59.890818 kernel: Key type fscrypt-provisioning registered Oct 9 00:52:59.890825 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 9 00:52:59.890832 kernel: ima: Allocated hash algorithm: sha1 Oct 9 00:52:59.890839 kernel: ima: No architecture policies found Oct 9 00:52:59.890846 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 9 00:52:59.890853 kernel: clk: Disabling unused clocks Oct 9 00:52:59.890860 kernel: Freeing unused kernel memory: 39552K Oct 9 00:52:59.890867 kernel: Run /init as init process Oct 9 00:52:59.890874 kernel: with arguments: Oct 9 00:52:59.890882 kernel: /init Oct 9 00:52:59.890889 kernel: with environment: Oct 9 00:52:59.890896 kernel: HOME=/ Oct 9 00:52:59.890902 kernel: TERM=linux Oct 9 00:52:59.890909 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 9 00:52:59.890918 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 00:52:59.890927 systemd[1]: Detected virtualization kvm. Oct 9 00:52:59.890936 systemd[1]: Detected architecture arm64. Oct 9 00:52:59.890943 systemd[1]: Running in initrd. Oct 9 00:52:59.890950 systemd[1]: No hostname configured, using default hostname. Oct 9 00:52:59.890958 systemd[1]: Hostname set to <localhost>. Oct 9 00:52:59.890965 systemd[1]: Initializing machine ID from VM UUID. Oct 9 00:52:59.890973 systemd[1]: Queued start job for default target initrd.target. Oct 9 00:52:59.890980 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 00:52:59.890988 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 00:52:59.890997 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 9 00:52:59.891005 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 00:52:59.891013 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 9 00:52:59.891020 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 9 00:52:59.891029 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 9 00:52:59.891038 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 9 00:52:59.891045 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 00:52:59.891054 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 00:52:59.891061 systemd[1]: Reached target paths.target - Path Units. Oct 9 00:52:59.891069 systemd[1]: Reached target slices.target - Slice Units. Oct 9 00:52:59.891076 systemd[1]: Reached target swap.target - Swaps. Oct 9 00:52:59.891084 systemd[1]: Reached target timers.target - Timer Units. Oct 9 00:52:59.891091 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 00:52:59.891099 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 00:52:59.891107 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 00:52:59.891114 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 00:52:59.891123 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 00:52:59.891131 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 00:52:59.891138 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 00:52:59.891146 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 00:52:59.891153 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Oct 9 00:52:59.891161 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 00:52:59.891169 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 9 00:52:59.891176 systemd[1]: Starting systemd-fsck-usr.service... Oct 9 00:52:59.891185 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 00:52:59.891193 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 00:52:59.891200 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:52:59.891208 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 9 00:52:59.891215 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 00:52:59.891223 systemd[1]: Finished systemd-fsck-usr.service. Oct 9 00:52:59.891233 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 00:52:59.891256 systemd-journald[238]: Collecting audit messages is disabled. Oct 9 00:52:59.891276 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 00:52:59.891285 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:52:59.891293 systemd-journald[238]: Journal started Oct 9 00:52:59.891321 systemd-journald[238]: Runtime Journal (/run/log/journal/feddafe1e1cc484abfaf2623a8a3c2fa) is 5.9M, max 47.3M, 41.4M free. Oct 9 00:52:59.883199 systemd-modules-load[239]: Inserted module 'overlay' Oct 9 00:52:59.893258 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 00:52:59.898340 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Oct 9 00:52:59.899111 systemd-modules-load[239]: Inserted module 'br_netfilter' Oct 9 00:52:59.899819 kernel: Bridge firewalling registered Oct 9 00:52:59.901441 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 00:52:59.902794 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 00:52:59.905454 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 00:52:59.907208 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 00:52:59.909138 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 00:52:59.918527 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 00:52:59.919688 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 00:52:59.921412 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 00:52:59.923973 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 9 00:52:59.929335 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:52:59.935631 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 00:52:59.944429 dracut-cmdline[274]: dracut-dracut-053 Oct 9 00:52:59.946825 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d2d67b5440410ae2d0aa86eba97891969be0a7a421fa55f13442706ef7ed2a5e Oct 9 00:52:59.962059 systemd-resolved[277]: Positive Trust Anchors: Oct 9 00:52:59.962134 systemd-resolved[277]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 00:52:59.962165 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 00:52:59.966797 systemd-resolved[277]: Defaulting to hostname 'linux'. Oct 9 00:52:59.967683 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 00:52:59.969190 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 00:53:00.012342 kernel: SCSI subsystem initialized Oct 9 00:53:00.017332 kernel: Loading iSCSI transport class v2.0-870. Oct 9 00:53:00.024352 kernel: iscsi: registered transport (tcp) Oct 9 00:53:00.036417 kernel: iscsi: registered transport (qla4xxx) Oct 9 00:53:00.036436 kernel: QLogic iSCSI HBA Driver Oct 9 00:53:00.078377 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 9 00:53:00.084489 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 9 00:53:00.101213 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 9 00:53:00.101261 kernel: device-mapper: uevent: version 1.0.3 Oct 9 00:53:00.101274 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 00:53:00.146330 kernel: raid6: neonx8 gen() 15757 MB/s Oct 9 00:53:00.163324 kernel: raid6: neonx4 gen() 15650 MB/s Oct 9 00:53:00.180327 kernel: raid6: neonx2 gen() 13204 MB/s Oct 9 00:53:00.197321 kernel: raid6: neonx1 gen() 10489 MB/s Oct 9 00:53:00.214322 kernel: raid6: int64x8 gen() 6949 MB/s Oct 9 00:53:00.231325 kernel: raid6: int64x4 gen() 7343 MB/s Oct 9 00:53:00.250327 kernel: raid6: int64x2 gen() 6991 MB/s Oct 9 00:53:00.267339 kernel: raid6: int64x1 gen() 5047 MB/s Oct 9 00:53:00.267370 kernel: raid6: using algorithm neonx8 gen() 15757 MB/s Oct 9 00:53:00.284341 kernel: raid6: .... xor() 12013 MB/s, rmw enabled Oct 9 00:53:00.284357 kernel: raid6: using neon recovery algorithm Oct 9 00:53:00.289330 kernel: xor: measuring software checksum speed Oct 9 00:53:00.289350 kernel: 8regs : 19213 MB/sec Oct 9 00:53:00.290720 kernel: 32regs : 18292 MB/sec Oct 9 00:53:00.290739 kernel: arm64_neon : 26778 MB/sec Oct 9 00:53:00.290749 kernel: xor: using function: arm64_neon (26778 MB/sec) Oct 9 00:53:00.342350 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 00:53:00.352811 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 9 00:53:00.360486 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 00:53:00.371374 systemd-udevd[461]: Using default interface naming scheme 'v255'. Oct 9 00:53:00.374452 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 00:53:00.384486 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 9 00:53:00.396815 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Oct 9 00:53:00.422861 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 9 00:53:00.433493 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 00:53:00.475755 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 00:53:00.482884 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 9 00:53:00.496157 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 9 00:53:00.497510 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 00:53:00.500879 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 00:53:00.502449 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 00:53:00.509441 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 9 00:53:00.520370 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 00:53:00.531333 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 9 00:53:00.536905 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 9 00:53:00.537055 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 00:53:00.537115 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 00:53:00.539782 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 00:53:00.544145 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 9 00:53:00.544161 kernel: GPT:9289727 != 19775487
Oct 9 00:53:00.544175 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 9 00:53:00.544184 kernel: GPT:9289727 != 19775487
Oct 9 00:53:00.544195 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 9 00:53:00.544204 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 00:53:00.543152 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 00:53:00.543235 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:53:00.545366 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 00:53:00.552577 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 00:53:00.558340 kernel: BTRFS: device fsid c25b3a2f-539f-42a7-8842-97b35e474647 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (509)
Oct 9 00:53:00.560346 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (523)
Oct 9 00:53:00.561072 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 9 00:53:00.567532 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:53:00.574771 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 9 00:53:00.578332 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 9 00:53:00.579171 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 9 00:53:00.584239 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 00:53:00.598501 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 9 00:53:00.600047 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 00:53:00.605287 disk-uuid[553]: Primary Header is updated.
Oct 9 00:53:00.605287 disk-uuid[553]: Secondary Entries is updated.
Oct 9 00:53:00.605287 disk-uuid[553]: Secondary Header is updated.
Oct 9 00:53:00.608392 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 00:53:00.618308 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 00:53:01.619359 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 00:53:01.621258 disk-uuid[554]: The operation has completed successfully.
Oct 9 00:53:01.643418 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 9 00:53:01.643509 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 9 00:53:01.667594 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 9 00:53:01.670545 sh[577]: Success
Oct 9 00:53:01.687109 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 9 00:53:01.733746 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 9 00:53:01.748907 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 9 00:53:01.749640 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 9 00:53:01.761907 kernel: BTRFS info (device dm-0): first mount of filesystem c25b3a2f-539f-42a7-8842-97b35e474647
Oct 9 00:53:01.761942 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 9 00:53:01.761952 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 9 00:53:01.761963 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 9 00:53:01.762657 kernel: BTRFS info (device dm-0): using free space tree
Oct 9 00:53:01.767505 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 9 00:53:01.768521 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 9 00:53:01.778449 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 9 00:53:01.779790 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 9 00:53:01.793666 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 00:53:01.793719 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 9 00:53:01.793732 kernel: BTRFS info (device vda6): using free space tree
Oct 9 00:53:01.797788 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 00:53:01.806216 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 9 00:53:01.807342 kernel: BTRFS info (device vda6): last unmount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 00:53:01.814827 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 9 00:53:01.822530 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 9 00:53:01.897697 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 00:53:01.909497 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 00:53:01.927072 ignition[672]: Ignition 2.19.0
Oct 9 00:53:01.927082 ignition[672]: Stage: fetch-offline
Oct 9 00:53:01.927115 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Oct 9 00:53:01.927123 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 00:53:01.927355 ignition[672]: parsed url from cmdline: ""
Oct 9 00:53:01.927358 ignition[672]: no config URL provided
Oct 9 00:53:01.927363 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 00:53:01.927370 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Oct 9 00:53:01.927396 ignition[672]: op(1): [started] loading QEMU firmware config module
Oct 9 00:53:01.927401 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 9 00:53:01.937922 systemd-networkd[766]: lo: Link UP
Oct 9 00:53:01.937925 systemd-networkd[766]: lo: Gained carrier
Oct 9 00:53:01.938734 systemd-networkd[766]: Enumeration completed
Oct 9 00:53:01.940400 ignition[672]: op(1): [finished] loading QEMU firmware config module
Oct 9 00:53:01.939033 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 00:53:01.939141 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 00:53:01.939144 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 00:53:01.939938 systemd-networkd[766]: eth0: Link UP
Oct 9 00:53:01.939941 systemd-networkd[766]: eth0: Gained carrier
Oct 9 00:53:01.939948 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 00:53:01.942590 systemd[1]: Reached target network.target - Network.
Oct 9 00:53:01.961395 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 00:53:01.986919 ignition[672]: parsing config with SHA512: cfd6d8ceffb40e37d466ae6ed5c26b91c1a76a4b2e26ff4e95ebb5e0fbcc00f74e7af26545c6f5aa58b9bf88a9c0281a41c978abcba59230b48118b76b44ee35
Oct 9 00:53:01.991225 unknown[672]: fetched base config from "system"
Oct 9 00:53:01.991235 unknown[672]: fetched user config from "qemu"
Oct 9 00:53:01.991647 ignition[672]: fetch-offline: fetch-offline passed
Oct 9 00:53:01.991718 ignition[672]: Ignition finished successfully
Oct 9 00:53:01.993260 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 00:53:01.994563 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 9 00:53:02.004973 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 9 00:53:02.016983 ignition[773]: Ignition 2.19.0
Oct 9 00:53:02.016990 ignition[773]: Stage: kargs
Oct 9 00:53:02.017137 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Oct 9 00:53:02.017146 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 00:53:02.018010 ignition[773]: kargs: kargs passed
Oct 9 00:53:02.018055 ignition[773]: Ignition finished successfully
Oct 9 00:53:02.022360 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 9 00:53:02.024019 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 9 00:53:02.039545 ignition[780]: Ignition 2.19.0
Oct 9 00:53:02.040232 ignition[780]: Stage: disks
Oct 9 00:53:02.040435 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Oct 9 00:53:02.040446 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 00:53:02.041336 ignition[780]: disks: disks passed
Oct 9 00:53:02.043594 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 9 00:53:02.041382 ignition[780]: Ignition finished successfully
Oct 9 00:53:02.044488 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 9 00:53:02.048398 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 00:53:02.049467 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 00:53:02.051001 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 00:53:02.052409 systemd[1]: Reached target basic.target - Basic System.
Oct 9 00:53:02.064674 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 00:53:02.080488 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 9 00:53:02.086923 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 00:53:02.103464 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 00:53:02.148251 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 00:53:02.149488 kernel: EXT4-fs (vda9): mounted filesystem 3a4adf89-ce2b-46a9-8e1a-433a27a27d16 r/w with ordered data mode. Quota mode: none.
Oct 9 00:53:02.149281 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 00:53:02.162500 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 00:53:02.165408 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 00:53:02.166175 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 9 00:53:02.166216 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 00:53:02.166237 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 00:53:02.171141 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 00:53:02.172904 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 00:53:02.178337 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800)
Oct 9 00:53:02.180333 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 00:53:02.180370 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 9 00:53:02.181654 kernel: BTRFS info (device vda6): using free space tree
Oct 9 00:53:02.185338 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 00:53:02.186372 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 00:53:02.224136 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 00:53:02.228426 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Oct 9 00:53:02.232948 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 00:53:02.239100 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 00:53:02.315986 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 00:53:02.325299 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 00:53:02.327581 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 00:53:02.331353 kernel: BTRFS info (device vda6): last unmount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 00:53:02.352904 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 00:53:02.356705 ignition[914]: INFO : Ignition 2.19.0
Oct 9 00:53:02.356705 ignition[914]: INFO : Stage: mount
Oct 9 00:53:02.358019 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 00:53:02.358019 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 00:53:02.358019 ignition[914]: INFO : mount: mount passed
Oct 9 00:53:02.358019 ignition[914]: INFO : Ignition finished successfully
Oct 9 00:53:02.358997 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 00:53:02.373487 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 00:53:02.760795 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 00:53:02.769487 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 00:53:02.774344 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928)
Oct 9 00:53:02.776345 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 00:53:02.776385 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 9 00:53:02.776396 kernel: BTRFS info (device vda6): using free space tree
Oct 9 00:53:02.782383 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 00:53:02.783326 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 00:53:02.801068 ignition[945]: INFO : Ignition 2.19.0
Oct 9 00:53:02.801068 ignition[945]: INFO : Stage: files
Oct 9 00:53:02.802295 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 00:53:02.802295 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 00:53:02.802295 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 00:53:02.804845 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 00:53:02.804845 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 00:53:02.806765 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 00:53:02.806765 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 00:53:02.806765 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 00:53:02.805529 unknown[945]: wrote ssh authorized keys file for user: core
Oct 9 00:53:02.810301 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 9 00:53:02.810301 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Oct 9 00:53:02.875558 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 9 00:53:03.126802 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 9 00:53:03.126802 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 9 00:53:03.129648 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Oct 9 00:53:03.330514 systemd-networkd[766]: eth0: Gained IPv6LL
Oct 9 00:53:03.446063 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 9 00:53:03.526141 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 9 00:53:03.528594 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 00:53:03.528594 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 00:53:03.528594 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 00:53:03.528594 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 00:53:03.528594 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 00:53:03.528594 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 00:53:03.528594 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 00:53:03.528594 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 00:53:03.528594 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 00:53:03.528594 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 00:53:03.540529 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 9 00:53:03.540529 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 9 00:53:03.540529 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 9 00:53:03.540529 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Oct 9 00:53:03.819199 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Oct 9 00:53:04.495155 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 9 00:53:04.495155 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Oct 9 00:53:04.498003 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 00:53:04.499471 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 00:53:04.499471 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Oct 9 00:53:04.499471 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Oct 9 00:53:04.499471 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 00:53:04.499471 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 00:53:04.499471 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Oct 9 00:53:04.499471 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Oct 9 00:53:04.520542 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 00:53:04.523794 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 00:53:04.525792 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 9 00:53:04.525792 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 00:53:04.525792 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 00:53:04.525792 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 00:53:04.525792 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 00:53:04.525792 ignition[945]: INFO : files: files passed
Oct 9 00:53:04.525792 ignition[945]: INFO : Ignition finished successfully
Oct 9 00:53:04.526992 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 00:53:04.532468 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 00:53:04.534036 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 00:53:04.535995 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 00:53:04.536075 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 00:53:04.541536 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 9 00:53:04.544813 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 00:53:04.544813 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 00:53:04.547361 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 00:53:04.548159 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 00:53:04.549595 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 00:53:04.558512 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 00:53:04.575798 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 00:53:04.575895 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 00:53:04.577485 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 00:53:04.578814 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 00:53:04.580073 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 00:53:04.580812 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 00:53:04.595238 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 00:53:04.603483 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 00:53:04.610950 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 00:53:04.611881 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 00:53:04.613403 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 00:53:04.614665 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 00:53:04.614783 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 00:53:04.616565 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 00:53:04.618057 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 00:53:04.619200 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 00:53:04.620416 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 00:53:04.621876 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 00:53:04.623225 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 00:53:04.624542 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 00:53:04.625922 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 00:53:04.627345 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 00:53:04.628601 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 00:53:04.629690 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 00:53:04.629805 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 00:53:04.631506 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 00:53:04.632913 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 00:53:04.634240 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 00:53:04.638375 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 00:53:04.639273 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 00:53:04.639403 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 00:53:04.641444 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 00:53:04.641557 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 00:53:04.642957 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 00:53:04.644077 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 00:53:04.647368 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 00:53:04.648289 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 00:53:04.649836 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 00:53:04.651063 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 00:53:04.651148 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 00:53:04.652217 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 00:53:04.652296 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 00:53:04.653387 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 00:53:04.653492 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 00:53:04.654772 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 00:53:04.654871 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 00:53:04.667472 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 00:53:04.668857 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 00:53:04.669510 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 00:53:04.669624 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 00:53:04.670947 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 00:53:04.671037 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 00:53:04.676081 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 00:53:04.676523 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 00:53:04.680725 ignition[1001]: INFO : Ignition 2.19.0
Oct 9 00:53:04.680725 ignition[1001]: INFO : Stage: umount
Oct 9 00:53:04.682811 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 00:53:04.682811 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 00:53:04.682811 ignition[1001]: INFO : umount: umount passed
Oct 9 00:53:04.682811 ignition[1001]: INFO : Ignition finished successfully
Oct 9 00:53:04.682590 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 00:53:04.683683 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 00:53:04.683784 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 00:53:04.685298 systemd[1]: Stopped target network.target - Network.
Oct 9 00:53:04.686229 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 00:53:04.686287 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 00:53:04.688237 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 00:53:04.688280 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 00:53:04.689473 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 00:53:04.689511 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 00:53:04.691121 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 00:53:04.691166 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 00:53:04.692663 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 00:53:04.694270 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 00:53:04.696804 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 00:53:04.696894 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 00:53:04.698214 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 00:53:04.698305 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 00:53:04.705112 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 00:53:04.706394 systemd-networkd[766]: eth0: DHCPv6 lease lost
Oct 9 00:53:04.706415 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 00:53:04.708817 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 00:53:04.709058 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 00:53:04.711100 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 00:53:04.711158 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 00:53:04.716470 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 00:53:04.717947 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 00:53:04.718007 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 00:53:04.719598 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 00:53:04.719643 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 00:53:04.721076 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 00:53:04.721118 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 00:53:04.722939 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 00:53:04.722982 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 00:53:04.724535 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 00:53:04.733937 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 9 00:53:04.734045 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 9 00:53:04.742093 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 9 00:53:04.742265 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 00:53:04.744088 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 9 00:53:04.744131 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 9 00:53:04.746191 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 9 00:53:04.746227 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 00:53:04.747592 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 9 00:53:04.747637 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 9 00:53:04.749708 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 9 00:53:04.749755 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 9 00:53:04.751733 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 00:53:04.751773 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 00:53:04.764474 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 9 00:53:04.765249 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 9 00:53:04.765304 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 00:53:04.766932 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Oct 9 00:53:04.766973 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 00:53:04.768359 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 9 00:53:04.768396 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 00:53:04.769982 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 00:53:04.770020 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:53:04.771636 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 9 00:53:04.771724 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 9 00:53:04.774703 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 9 00:53:04.777067 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 9 00:53:04.786583 systemd[1]: Switching root. Oct 9 00:53:04.809150 systemd-journald[238]: Journal stopped Oct 9 00:53:05.471283 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Oct 9 00:53:05.471359 kernel: SELinux: policy capability network_peer_controls=1 Oct 9 00:53:05.471372 kernel: SELinux: policy capability open_perms=1 Oct 9 00:53:05.471382 kernel: SELinux: policy capability extended_socket_class=1 Oct 9 00:53:05.471392 kernel: SELinux: policy capability always_check_network=0 Oct 9 00:53:05.471401 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 9 00:53:05.471411 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 9 00:53:05.471421 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 9 00:53:05.471430 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 9 00:53:05.471440 kernel: audit: type=1403 audit(1728435184.959:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 9 00:53:05.471455 systemd[1]: Successfully loaded SELinux policy in 32.188ms. 
Oct 9 00:53:05.471475 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.176ms. Oct 9 00:53:05.471486 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 00:53:05.471497 systemd[1]: Detected virtualization kvm. Oct 9 00:53:05.471507 systemd[1]: Detected architecture arm64. Oct 9 00:53:05.471517 systemd[1]: Detected first boot. Oct 9 00:53:05.471527 systemd[1]: Initializing machine ID from VM UUID. Oct 9 00:53:05.471538 zram_generator::config[1047]: No configuration found. Oct 9 00:53:05.471551 systemd[1]: Populated /etc with preset unit settings. Oct 9 00:53:05.471561 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 9 00:53:05.471571 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 9 00:53:05.471582 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 9 00:53:05.471592 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 9 00:53:05.471603 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 9 00:53:05.471613 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 9 00:53:05.471623 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 9 00:53:05.471634 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 9 00:53:05.471646 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 9 00:53:05.471656 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 9 00:53:05.471666 systemd[1]: Created slice user.slice - User and Session Slice. 
Oct 9 00:53:05.471677 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 00:53:05.471697 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 00:53:05.471708 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 9 00:53:05.471719 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 9 00:53:05.471730 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 9 00:53:05.471742 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 00:53:05.471753 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 9 00:53:05.471764 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 00:53:05.471774 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 9 00:53:05.471784 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 9 00:53:05.471798 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 9 00:53:05.471809 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 9 00:53:05.471820 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 00:53:05.471832 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 00:53:05.471842 systemd[1]: Reached target slices.target - Slice Units. Oct 9 00:53:05.471853 systemd[1]: Reached target swap.target - Swaps. Oct 9 00:53:05.471863 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 9 00:53:05.471874 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 9 00:53:05.471885 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Oct 9 00:53:05.471895 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 00:53:05.471905 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 00:53:05.471916 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 9 00:53:05.471929 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 9 00:53:05.471940 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 9 00:53:05.471950 systemd[1]: Mounting media.mount - External Media Directory... Oct 9 00:53:05.471961 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 9 00:53:05.471971 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 9 00:53:05.471981 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 9 00:53:05.471992 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 9 00:53:05.472002 systemd[1]: Reached target machines.target - Containers. Oct 9 00:53:05.472012 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 9 00:53:05.472024 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 00:53:05.472035 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 00:53:05.472045 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 9 00:53:05.472055 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 00:53:05.472065 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 00:53:05.472075 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 00:53:05.472085 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Oct 9 00:53:05.472095 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 00:53:05.472107 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 9 00:53:05.472118 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 9 00:53:05.472129 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 9 00:53:05.472139 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 9 00:53:05.472150 systemd[1]: Stopped systemd-fsck-usr.service. Oct 9 00:53:05.472160 kernel: fuse: init (API version 7.39) Oct 9 00:53:05.472169 kernel: ACPI: bus type drm_connector registered Oct 9 00:53:05.472179 kernel: loop: module loaded Oct 9 00:53:05.472188 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 00:53:05.472201 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 00:53:05.472211 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 9 00:53:05.472221 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 9 00:53:05.472232 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 00:53:05.472242 systemd[1]: verity-setup.service: Deactivated successfully. Oct 9 00:53:05.472252 systemd[1]: Stopped verity-setup.service. Oct 9 00:53:05.472280 systemd-journald[1112]: Collecting audit messages is disabled. Oct 9 00:53:05.472306 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 9 00:53:05.472366 systemd-journald[1112]: Journal started Oct 9 00:53:05.472389 systemd-journald[1112]: Runtime Journal (/run/log/journal/feddafe1e1cc484abfaf2623a8a3c2fa) is 5.9M, max 47.3M, 41.4M free. Oct 9 00:53:05.304911 systemd[1]: Queued start job for default target multi-user.target. 
Oct 9 00:53:05.320700 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 9 00:53:05.321046 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 9 00:53:05.476352 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 00:53:05.475593 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 9 00:53:05.476508 systemd[1]: Mounted media.mount - External Media Directory. Oct 9 00:53:05.477299 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 9 00:53:05.478174 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 9 00:53:05.479138 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 9 00:53:05.481359 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 9 00:53:05.482433 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 00:53:05.483570 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 9 00:53:05.483722 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 9 00:53:05.484851 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 00:53:05.484975 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 00:53:05.486041 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 00:53:05.486167 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 00:53:05.487222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 00:53:05.487532 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 00:53:05.488634 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 9 00:53:05.488774 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 9 00:53:05.489965 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 00:53:05.490087 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Oct 9 00:53:05.491123 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 00:53:05.492202 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 9 00:53:05.494644 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 9 00:53:05.506123 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 9 00:53:05.514404 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 9 00:53:05.516440 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 9 00:53:05.517215 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 9 00:53:05.517248 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 00:53:05.518988 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 9 00:53:05.520914 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 9 00:53:05.523525 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 9 00:53:05.524619 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 00:53:05.526364 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 9 00:53:05.528150 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 9 00:53:05.529169 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 00:53:05.530605 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 9 00:53:05.531611 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Oct 9 00:53:05.536075 systemd-journald[1112]: Time spent on flushing to /var/log/journal/feddafe1e1cc484abfaf2623a8a3c2fa is 18.580ms for 859 entries. Oct 9 00:53:05.536075 systemd-journald[1112]: System Journal (/var/log/journal/feddafe1e1cc484abfaf2623a8a3c2fa) is 8.0M, max 195.6M, 187.6M free. Oct 9 00:53:05.559582 systemd-journald[1112]: Received client request to flush runtime journal. Oct 9 00:53:05.559617 kernel: loop0: detected capacity change from 0 to 189592 Oct 9 00:53:05.535479 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 00:53:05.540543 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 9 00:53:05.542577 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 00:53:05.546357 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 00:53:05.547645 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 9 00:53:05.548796 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 9 00:53:05.550239 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 9 00:53:05.551457 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 9 00:53:05.554472 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 9 00:53:05.557698 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 9 00:53:05.560840 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 9 00:53:05.562781 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 9 00:53:05.578556 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Oct 9 00:53:05.584846 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 9 00:53:05.587516 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:53:05.589250 systemd-tmpfiles[1159]: ACLs are not supported, ignoring. Oct 9 00:53:05.589267 systemd-tmpfiles[1159]: ACLs are not supported, ignoring. Oct 9 00:53:05.593623 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 00:53:05.595397 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 9 00:53:05.595973 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 9 00:53:05.602556 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 9 00:53:05.611362 kernel: loop1: detected capacity change from 0 to 113456 Oct 9 00:53:05.629359 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 9 00:53:05.639455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 00:53:05.644711 kernel: loop2: detected capacity change from 0 to 116808 Oct 9 00:53:05.655366 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Oct 9 00:53:05.655386 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Oct 9 00:53:05.659689 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 00:53:05.666778 kernel: loop3: detected capacity change from 0 to 189592 Oct 9 00:53:05.672889 kernel: loop4: detected capacity change from 0 to 113456 Oct 9 00:53:05.675966 kernel: loop5: detected capacity change from 0 to 116808 Oct 9 00:53:05.678179 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 9 00:53:05.678581 (sd-merge)[1186]: Merged extensions into '/usr'. Oct 9 00:53:05.682485 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)... Oct 9 00:53:05.682500 systemd[1]: Reloading... 
Oct 9 00:53:05.732497 zram_generator::config[1209]: No configuration found. Oct 9 00:53:05.812090 ldconfig[1153]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 9 00:53:05.826985 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:53:05.861501 systemd[1]: Reloading finished in 178 ms. Oct 9 00:53:05.892431 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 9 00:53:05.894390 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 9 00:53:05.907476 systemd[1]: Starting ensure-sysext.service... Oct 9 00:53:05.909079 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 00:53:05.918686 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Oct 9 00:53:05.918701 systemd[1]: Reloading... Oct 9 00:53:05.926753 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 9 00:53:05.927015 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 9 00:53:05.927654 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 9 00:53:05.927882 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Oct 9 00:53:05.927937 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Oct 9 00:53:05.930246 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 00:53:05.930259 systemd-tmpfiles[1247]: Skipping /boot Oct 9 00:53:05.937138 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. 
Oct 9 00:53:05.937157 systemd-tmpfiles[1247]: Skipping /boot Oct 9 00:53:05.968347 zram_generator::config[1277]: No configuration found. Oct 9 00:53:06.044174 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:53:06.078909 systemd[1]: Reloading finished in 159 ms. Oct 9 00:53:06.093109 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 9 00:53:06.105668 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 00:53:06.113408 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 00:53:06.115607 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 9 00:53:06.117510 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 9 00:53:06.122626 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 00:53:06.127568 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 00:53:06.132586 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 9 00:53:06.137051 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 00:53:06.138190 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 00:53:06.143762 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 00:53:06.145605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 00:53:06.146581 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 00:53:06.149694 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Oct 9 00:53:06.151131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 00:53:06.151325 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 00:53:06.152606 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 00:53:06.154343 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 00:53:06.157377 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 9 00:53:06.158899 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 00:53:06.159048 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 00:53:06.166229 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 00:53:06.176725 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 00:53:06.182400 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 00:53:06.186022 systemd-udevd[1318]: Using default interface naming scheme 'v255'. Oct 9 00:53:06.186135 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 00:53:06.190588 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 00:53:06.191532 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 00:53:06.193615 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 9 00:53:06.196280 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 9 00:53:06.197912 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 9 00:53:06.199465 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 9 00:53:06.201967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 9 00:53:06.202096 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 00:53:06.203520 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 00:53:06.203669 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 00:53:06.205014 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 00:53:06.205151 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 00:53:06.206547 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 00:53:06.206668 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 00:53:06.208100 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 9 00:53:06.209264 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 00:53:06.219734 systemd[1]: Finished ensure-sysext.service. Oct 9 00:53:06.227466 augenrules[1371]: No rules Oct 9 00:53:06.241709 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 00:53:06.243864 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 00:53:06.243942 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 00:53:06.245817 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1379) Oct 9 00:53:06.245877 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1383) Oct 9 00:53:06.248328 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1379) Oct 9 00:53:06.253546 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Oct 9 00:53:06.254542 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 00:53:06.254982 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 00:53:06.255728 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 00:53:06.263757 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 9 00:53:06.283771 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 00:53:06.285562 systemd-resolved[1313]: Positive Trust Anchors: Oct 9 00:53:06.285637 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 00:53:06.288354 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 00:53:06.290519 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 9 00:53:06.295145 systemd-resolved[1313]: Defaulting to hostname 'linux'. Oct 9 00:53:06.302193 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 00:53:06.303153 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 00:53:06.315810 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Oct 9 00:53:06.350722 systemd-networkd[1384]: lo: Link UP Oct 9 00:53:06.350734 systemd-networkd[1384]: lo: Gained carrier Oct 9 00:53:06.351725 systemd-networkd[1384]: Enumeration completed Oct 9 00:53:06.357557 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:53:06.358551 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 00:53:06.358709 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 00:53:06.358789 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 00:53:06.359612 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 9 00:53:06.359791 systemd-networkd[1384]: eth0: Link UP Oct 9 00:53:06.359865 systemd-networkd[1384]: eth0: Gained carrier Oct 9 00:53:06.359913 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 00:53:06.360813 systemd[1]: Reached target network.target - Network. Oct 9 00:53:06.361602 systemd[1]: Reached target time-set.target - System Time Set. Oct 9 00:53:06.363646 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 9 00:53:06.369856 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 9 00:53:06.372660 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 9 00:53:06.386397 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 00:53:06.387729 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 00:53:06.387056 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. 
Oct 9 00:53:06.387926 systemd-timesyncd[1388]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 9 00:53:06.387977 systemd-timesyncd[1388]: Initial clock synchronization to Wed 2024-10-09 00:53:06.545694 UTC. Oct 9 00:53:06.399999 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:53:06.413656 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 9 00:53:06.414787 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 00:53:06.415623 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 00:53:06.416482 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 00:53:06.417371 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 00:53:06.418417 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 00:53:06.419284 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 00:53:06.420180 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 00:53:06.421086 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 00:53:06.421121 systemd[1]: Reached target paths.target - Path Units. Oct 9 00:53:06.421788 systemd[1]: Reached target timers.target - Timer Units. Oct 9 00:53:06.423296 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 9 00:53:06.425472 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 00:53:06.439281 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 9 00:53:06.441204 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 9 00:53:06.442504 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Oct 9 00:53:06.443364 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 00:53:06.444037 systemd[1]: Reached target basic.target - Basic System. Oct 9 00:53:06.444789 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 00:53:06.444819 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 9 00:53:06.445731 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 00:53:06.447440 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 00:53:06.450465 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 00:53:06.450478 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 00:53:06.454508 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 9 00:53:06.456184 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 00:53:06.460467 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 00:53:06.463516 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 00:53:06.464021 jq[1417]: false Oct 9 00:53:06.465176 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 9 00:53:06.469539 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 9 00:53:06.474091 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 9 00:53:06.482183 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 9 00:53:06.482601 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Oct 9 00:53:06.483542 systemd[1]: Starting update-engine.service - Update Engine... Oct 9 00:53:06.485822 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 9 00:53:06.487159 extend-filesystems[1418]: Found loop3 Oct 9 00:53:06.491870 extend-filesystems[1418]: Found loop4 Oct 9 00:53:06.491870 extend-filesystems[1418]: Found loop5 Oct 9 00:53:06.491870 extend-filesystems[1418]: Found vda Oct 9 00:53:06.491870 extend-filesystems[1418]: Found vda1 Oct 9 00:53:06.491870 extend-filesystems[1418]: Found vda2 Oct 9 00:53:06.491870 extend-filesystems[1418]: Found vda3 Oct 9 00:53:06.491870 extend-filesystems[1418]: Found usr Oct 9 00:53:06.491870 extend-filesystems[1418]: Found vda4 Oct 9 00:53:06.491870 extend-filesystems[1418]: Found vda6 Oct 9 00:53:06.491870 extend-filesystems[1418]: Found vda7 Oct 9 00:53:06.491870 extend-filesystems[1418]: Found vda9 Oct 9 00:53:06.491870 extend-filesystems[1418]: Checking size of /dev/vda9 Oct 9 00:53:06.488587 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 9 00:53:06.523686 extend-filesystems[1418]: Resized partition /dev/vda9 Oct 9 00:53:06.494918 dbus-daemon[1416]: [system] SELinux support is enabled Oct 9 00:53:06.493750 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 9 00:53:06.526638 jq[1431]: true Oct 9 00:53:06.527268 extend-filesystems[1449]: resize2fs 1.47.1 (20-May-2024) Oct 9 00:53:06.493937 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 9 00:53:06.494194 systemd[1]: motdgen.service: Deactivated successfully. Oct 9 00:53:06.494565 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 9 00:53:06.530581 update_engine[1430]: I20241009 00:53:06.529588 1430 main.cc:92] Flatcar Update Engine starting Oct 9 00:53:06.498458 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Oct 9 00:53:06.502728 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 9 00:53:06.502905 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 9 00:53:06.516432 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 9 00:53:06.516474 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 9 00:53:06.518083 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 9 00:53:06.518125 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 9 00:53:06.522809 (ntainerd)[1440]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 9 00:53:06.532342 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 9 00:53:06.532996 jq[1439]: true Oct 9 00:53:06.538168 systemd[1]: Started update-engine.service - Update Engine. Oct 9 00:53:06.539480 update_engine[1430]: I20241009 00:53:06.538295 1430 update_check_scheduler.cc:74] Next update check in 11m48s Oct 9 00:53:06.541654 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Oct 9 00:53:06.548364 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 9 00:53:06.548410 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1373) Oct 9 00:53:06.549596 tar[1438]: linux-arm64/helm Oct 9 00:53:06.563570 systemd-logind[1429]: Watching system buttons on /dev/input/event0 (Power Button) Oct 9 00:53:06.571654 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 9 00:53:06.571654 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 9 00:53:06.571654 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 9 00:53:06.565098 systemd-logind[1429]: New seat seat0. Oct 9 00:53:06.592371 extend-filesystems[1418]: Resized filesystem in /dev/vda9 Oct 9 00:53:06.569471 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 9 00:53:06.571350 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 9 00:53:06.583807 systemd[1]: Started systemd-logind.service - User Login Management. Oct 9 00:53:06.617323 bash[1472]: Updated "/home/core/.ssh/authorized_keys" Oct 9 00:53:06.618893 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 9 00:53:06.621947 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 9 00:53:06.632742 locksmithd[1455]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 9 00:53:06.739805 containerd[1440]: time="2024-10-09T00:53:06.739707480Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22 Oct 9 00:53:06.766624 containerd[1440]: time="2024-10-09T00:53:06.766584840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Oct 9 00:53:06.768364 containerd[1440]: time="2024-10-09T00:53:06.767988040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:53:06.768364 containerd[1440]: time="2024-10-09T00:53:06.768021920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 9 00:53:06.768364 containerd[1440]: time="2024-10-09T00:53:06.768038560Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 9 00:53:06.768364 containerd[1440]: time="2024-10-09T00:53:06.768172160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 9 00:53:06.768364 containerd[1440]: time="2024-10-09T00:53:06.768191680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 9 00:53:06.768364 containerd[1440]: time="2024-10-09T00:53:06.768239440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:53:06.768364 containerd[1440]: time="2024-10-09T00:53:06.768250320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:53:06.768555 containerd[1440]: time="2024-10-09T00:53:06.768416720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:53:06.768555 containerd[1440]: time="2024-10-09T00:53:06.768432160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 9 00:53:06.768555 containerd[1440]: time="2024-10-09T00:53:06.768444920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:53:06.768555 containerd[1440]: time="2024-10-09T00:53:06.768459680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 9 00:53:06.768555 containerd[1440]: time="2024-10-09T00:53:06.768538720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:53:06.768789 containerd[1440]: time="2024-10-09T00:53:06.768762440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:53:06.768889 containerd[1440]: time="2024-10-09T00:53:06.768872840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:53:06.768910 containerd[1440]: time="2024-10-09T00:53:06.768890680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 9 00:53:06.768980 containerd[1440]: time="2024-10-09T00:53:06.768967120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Oct 9 00:53:06.769023 containerd[1440]: time="2024-10-09T00:53:06.769011520Z" level=info msg="metadata content store policy set" policy=shared Oct 9 00:53:06.772025 containerd[1440]: time="2024-10-09T00:53:06.771984160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 9 00:53:06.772077 containerd[1440]: time="2024-10-09T00:53:06.772037080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 9 00:53:06.772077 containerd[1440]: time="2024-10-09T00:53:06.772052200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 9 00:53:06.772077 containerd[1440]: time="2024-10-09T00:53:06.772066360Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 9 00:53:06.772144 containerd[1440]: time="2024-10-09T00:53:06.772079480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 9 00:53:06.772226 containerd[1440]: time="2024-10-09T00:53:06.772205000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 9 00:53:06.772728 containerd[1440]: time="2024-10-09T00:53:06.772693240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 9 00:53:06.772891 containerd[1440]: time="2024-10-09T00:53:06.772869240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 9 00:53:06.772926 containerd[1440]: time="2024-10-09T00:53:06.772896400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 9 00:53:06.772926 containerd[1440]: time="2024-10-09T00:53:06.772912760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Oct 9 00:53:06.772964 containerd[1440]: time="2024-10-09T00:53:06.772926360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 9 00:53:06.773018 containerd[1440]: time="2024-10-09T00:53:06.772940160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 9 00:53:06.773042 containerd[1440]: time="2024-10-09T00:53:06.773023240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 9 00:53:06.773042 containerd[1440]: time="2024-10-09T00:53:06.773038400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 9 00:53:06.773075 containerd[1440]: time="2024-10-09T00:53:06.773052240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 9 00:53:06.773075 containerd[1440]: time="2024-10-09T00:53:06.773064280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 9 00:53:06.773117 containerd[1440]: time="2024-10-09T00:53:06.773076520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 9 00:53:06.773117 containerd[1440]: time="2024-10-09T00:53:06.773088680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 9 00:53:06.773150 containerd[1440]: time="2024-10-09T00:53:06.773115840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 9 00:53:06.773191 containerd[1440]: time="2024-10-09T00:53:06.773176200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Oct 9 00:53:06.773215 containerd[1440]: time="2024-10-09T00:53:06.773196840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 9 00:53:06.773215 containerd[1440]: time="2024-10-09T00:53:06.773208960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 9 00:53:06.773254 containerd[1440]: time="2024-10-09T00:53:06.773219960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 9 00:53:06.773254 containerd[1440]: time="2024-10-09T00:53:06.773232680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 9 00:53:06.773254 containerd[1440]: time="2024-10-09T00:53:06.773243200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 9 00:53:06.773303 containerd[1440]: time="2024-10-09T00:53:06.773255080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 9 00:53:06.773303 containerd[1440]: time="2024-10-09T00:53:06.773268280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 9 00:53:06.773303 containerd[1440]: time="2024-10-09T00:53:06.773282600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 9 00:53:06.773303 containerd[1440]: time="2024-10-09T00:53:06.773293640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 9 00:53:06.773405 containerd[1440]: time="2024-10-09T00:53:06.773306320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 9 00:53:06.773405 containerd[1440]: time="2024-10-09T00:53:06.773339280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Oct 9 00:53:06.773405 containerd[1440]: time="2024-10-09T00:53:06.773354000Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 9 00:53:06.773455 containerd[1440]: time="2024-10-09T00:53:06.773374040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 9 00:53:06.773455 containerd[1440]: time="2024-10-09T00:53:06.773442240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 9 00:53:06.773455 containerd[1440]: time="2024-10-09T00:53:06.773453000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 9 00:53:06.774499 containerd[1440]: time="2024-10-09T00:53:06.774344480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 00:53:06.774529 containerd[1440]: time="2024-10-09T00:53:06.774503600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 00:53:06.774550 containerd[1440]: time="2024-10-09T00:53:06.774536080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 00:53:06.774568 containerd[1440]: time="2024-10-09T00:53:06.774549840Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 00:53:06.774568 containerd[1440]: time="2024-10-09T00:53:06.774559240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 00:53:06.774614 containerd[1440]: time="2024-10-09T00:53:06.774572120Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Oct 9 00:53:06.774614 containerd[1440]: time="2024-10-09T00:53:06.774582080Z" level=info msg="NRI interface is disabled by configuration." Oct 9 00:53:06.774614 containerd[1440]: time="2024-10-09T00:53:06.774591440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 9 00:53:06.774985 containerd[1440]: time="2024-10-09T00:53:06.774930360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 00:53:06.775205 containerd[1440]: time="2024-10-09T00:53:06.774984120Z" level=info msg="Connect containerd service" Oct 9 00:53:06.775205 containerd[1440]: time="2024-10-09T00:53:06.775013920Z" level=info msg="using legacy CRI server" Oct 9 00:53:06.775205 containerd[1440]: time="2024-10-09T00:53:06.775020240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 00:53:06.775205 containerd[1440]: time="2024-10-09T00:53:06.775112080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 00:53:06.776191 containerd[1440]: time="2024-10-09T00:53:06.776153080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 00:53:06.776414 containerd[1440]: time="2024-10-09T00:53:06.776382400Z" level=info msg="Start subscribing containerd event" Oct 9 
00:53:06.776717 containerd[1440]: time="2024-10-09T00:53:06.776690440Z" level=info msg="Start recovering state" Oct 9 00:53:06.777040 containerd[1440]: time="2024-10-09T00:53:06.777002760Z" level=info msg="Start event monitor" Oct 9 00:53:06.777040 containerd[1440]: time="2024-10-09T00:53:06.777035240Z" level=info msg="Start snapshots syncer" Oct 9 00:53:06.777087 containerd[1440]: time="2024-10-09T00:53:06.777046120Z" level=info msg="Start cni network conf syncer for default" Oct 9 00:53:06.777087 containerd[1440]: time="2024-10-09T00:53:06.777053400Z" level=info msg="Start streaming server" Oct 9 00:53:06.777471 containerd[1440]: time="2024-10-09T00:53:06.777447800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 00:53:06.777520 containerd[1440]: time="2024-10-09T00:53:06.777506560Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 00:53:06.778776 containerd[1440]: time="2024-10-09T00:53:06.777801880Z" level=info msg="containerd successfully booted in 0.039784s" Oct 9 00:53:06.777867 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 00:53:06.913200 tar[1438]: linux-arm64/LICENSE Oct 9 00:53:06.913200 tar[1438]: linux-arm64/README.md Oct 9 00:53:06.923354 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 00:53:06.967354 sshd_keygen[1434]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 9 00:53:06.985549 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 00:53:06.998558 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 00:53:07.003946 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 00:53:07.004169 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 00:53:07.008542 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 00:53:07.020430 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Oct 9 00:53:07.031614 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 00:53:07.033581 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 9 00:53:07.034747 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 00:53:08.130944 systemd-networkd[1384]: eth0: Gained IPv6LL Oct 9 00:53:08.133982 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 00:53:08.135496 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 00:53:08.148541 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 9 00:53:08.150635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:53:08.152456 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 00:53:08.166848 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 9 00:53:08.167036 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 9 00:53:08.168413 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 00:53:08.174434 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 00:53:08.643551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:53:08.644807 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 00:53:08.647480 (kubelet)[1529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 00:53:08.650450 systemd[1]: Startup finished in 525ms (kernel) + 5.262s (initrd) + 3.723s (userspace) = 9.511s. 
Oct 9 00:53:09.097938 kubelet[1529]: E1009 00:53:09.097784 1529 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 00:53:09.100488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 00:53:09.100633 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 00:53:12.268980 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 00:53:12.270149 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:56178.service - OpenSSH per-connection server daemon (10.0.0.1:56178). Oct 9 00:53:12.317411 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 56178 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:53:12.319120 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:53:12.338243 systemd-logind[1429]: New session 1 of user core. Oct 9 00:53:12.339262 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 00:53:12.351561 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 00:53:12.361814 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 00:53:12.365975 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 00:53:12.371997 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 00:53:12.445356 systemd[1547]: Queued start job for default target default.target. Oct 9 00:53:12.457250 systemd[1547]: Created slice app.slice - User Application Slice. Oct 9 00:53:12.457305 systemd[1547]: Reached target paths.target - Paths. Oct 9 00:53:12.457341 systemd[1547]: Reached target timers.target - Timers. 
Oct 9 00:53:12.458586 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 00:53:12.469642 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 00:53:12.469748 systemd[1547]: Reached target sockets.target - Sockets. Oct 9 00:53:12.469761 systemd[1547]: Reached target basic.target - Basic System. Oct 9 00:53:12.469793 systemd[1547]: Reached target default.target - Main User Target. Oct 9 00:53:12.469818 systemd[1547]: Startup finished in 92ms. Oct 9 00:53:12.470406 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 00:53:12.473267 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 00:53:12.547279 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:39514.service - OpenSSH per-connection server daemon (10.0.0.1:39514). Oct 9 00:53:12.582704 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 39514 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:53:12.584308 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:53:12.588227 systemd-logind[1429]: New session 2 of user core. Oct 9 00:53:12.600502 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 00:53:12.654623 sshd[1558]: pam_unix(sshd:session): session closed for user core Oct 9 00:53:12.666609 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:39514.service: Deactivated successfully. Oct 9 00:53:12.668537 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 00:53:12.670018 systemd-logind[1429]: Session 2 logged out. Waiting for processes to exit. Oct 9 00:53:12.671244 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:39518.service - OpenSSH per-connection server daemon (10.0.0.1:39518). Oct 9 00:53:12.671950 systemd-logind[1429]: Removed session 2. 
Oct 9 00:53:12.708524 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 39518 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:53:12.710121 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:53:12.714141 systemd-logind[1429]: New session 3 of user core. Oct 9 00:53:12.721476 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 00:53:12.770693 sshd[1565]: pam_unix(sshd:session): session closed for user core Oct 9 00:53:12.783769 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:39518.service: Deactivated successfully. Oct 9 00:53:12.785242 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 00:53:12.787610 systemd-logind[1429]: Session 3 logged out. Waiting for processes to exit. Oct 9 00:53:12.796573 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:39520.service - OpenSSH per-connection server daemon (10.0.0.1:39520). Oct 9 00:53:12.797429 systemd-logind[1429]: Removed session 3. Oct 9 00:53:12.827565 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 39520 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:53:12.829078 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:53:12.833262 systemd-logind[1429]: New session 4 of user core. Oct 9 00:53:12.846488 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 00:53:12.899860 sshd[1572]: pam_unix(sshd:session): session closed for user core Oct 9 00:53:12.912784 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:39520.service: Deactivated successfully. Oct 9 00:53:12.914256 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 00:53:12.916641 systemd-logind[1429]: Session 4 logged out. Waiting for processes to exit. Oct 9 00:53:12.918104 systemd-logind[1429]: Removed session 4. Oct 9 00:53:12.919970 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:39526.service - OpenSSH per-connection server daemon (10.0.0.1:39526). 
Oct 9 00:53:12.953547 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 39526 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:53:12.954788 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:53:12.958549 systemd-logind[1429]: New session 5 of user core. Oct 9 00:53:12.969497 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 00:53:13.035783 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 00:53:13.036068 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 00:53:13.053074 sudo[1582]: pam_unix(sudo:session): session closed for user root Oct 9 00:53:13.054759 sshd[1579]: pam_unix(sshd:session): session closed for user core Oct 9 00:53:13.072708 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:39526.service: Deactivated successfully. Oct 9 00:53:13.074424 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 00:53:13.075596 systemd-logind[1429]: Session 5 logged out. Waiting for processes to exit. Oct 9 00:53:13.088585 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:39540.service - OpenSSH per-connection server daemon (10.0.0.1:39540). Oct 9 00:53:13.089345 systemd-logind[1429]: Removed session 5. Oct 9 00:53:13.119718 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 39540 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:53:13.121104 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:53:13.124971 systemd-logind[1429]: New session 6 of user core. Oct 9 00:53:13.136534 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 9 00:53:13.191351 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 9 00:53:13.192042 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 00:53:13.195079 sudo[1591]: pam_unix(sudo:session): session closed for user root
Oct 9 00:53:13.202607 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 9 00:53:13.202982 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 00:53:13.217177 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 00:53:13.241538 augenrules[1613]: No rules
Oct 9 00:53:13.242261 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 00:53:13.242449 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 00:53:13.244090 sudo[1590]: pam_unix(sudo:session): session closed for user root
Oct 9 00:53:13.246005 sshd[1587]: pam_unix(sshd:session): session closed for user core
Oct 9 00:53:13.254614 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:39540.service: Deactivated successfully.
Oct 9 00:53:13.256045 systemd[1]: session-6.scope: Deactivated successfully.
Oct 9 00:53:13.257904 systemd-logind[1429]: Session 6 logged out. Waiting for processes to exit.
Oct 9 00:53:13.272679 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:39544.service - OpenSSH per-connection server daemon (10.0.0.1:39544).
Oct 9 00:53:13.273793 systemd-logind[1429]: Removed session 6.
Oct 9 00:53:13.304340 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 39544 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:53:13.306423 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:53:13.310505 systemd-logind[1429]: New session 7 of user core.
Oct 9 00:53:13.321477 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 9 00:53:13.372338 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 9 00:53:13.372619 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 00:53:13.700548 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 9 00:53:13.700630 (dockerd)[1645]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 9 00:53:13.937712 dockerd[1645]: time="2024-10-09T00:53:13.937655420Z" level=info msg="Starting up"
Oct 9 00:53:14.074971 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4009196134-merged.mount: Deactivated successfully.
Oct 9 00:53:14.094838 dockerd[1645]: time="2024-10-09T00:53:14.094793476Z" level=info msg="Loading containers: start."
Oct 9 00:53:14.234434 kernel: Initializing XFRM netlink socket
Oct 9 00:53:14.295786 systemd-networkd[1384]: docker0: Link UP
Oct 9 00:53:14.335562 dockerd[1645]: time="2024-10-09T00:53:14.335468673Z" level=info msg="Loading containers: done."
Oct 9 00:53:14.348522 dockerd[1645]: time="2024-10-09T00:53:14.348476889Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 9 00:53:14.348636 dockerd[1645]: time="2024-10-09T00:53:14.348560693Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Oct 9 00:53:14.348705 dockerd[1645]: time="2024-10-09T00:53:14.348653084Z" level=info msg="Daemon has completed initialization"
Oct 9 00:53:14.377077 dockerd[1645]: time="2024-10-09T00:53:14.377030912Z" level=info msg="API listen on /run/docker.sock"
Oct 9 00:53:14.377793 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 9 00:53:14.806379 containerd[1440]: time="2024-10-09T00:53:14.806233881Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\""
Oct 9 00:53:15.072920 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1913190490-merged.mount: Deactivated successfully.
Oct 9 00:53:15.500299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3986310297.mount: Deactivated successfully.
Oct 9 00:53:16.406648 containerd[1440]: time="2024-10-09T00:53:16.406590276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:16.407112 containerd[1440]: time="2024-10-09T00:53:16.407076745Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=25691523"
Oct 9 00:53:16.407858 containerd[1440]: time="2024-10-09T00:53:16.407808661Z" level=info msg="ImageCreate event name:\"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:16.410670 containerd[1440]: time="2024-10-09T00:53:16.410638389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:16.412095 containerd[1440]: time="2024-10-09T00:53:16.411940105Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"25688321\" in 1.605659115s"
Oct 9 00:53:16.412095 containerd[1440]: time="2024-10-09T00:53:16.411978814Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\""
Oct 9 00:53:16.412684 containerd[1440]: time="2024-10-09T00:53:16.412660272Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\""
Oct 9 00:53:17.535166 containerd[1440]: time="2024-10-09T00:53:17.535115910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:17.535611 containerd[1440]: time="2024-10-09T00:53:17.535576328Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=22460088"
Oct 9 00:53:17.536546 containerd[1440]: time="2024-10-09T00:53:17.536506451Z" level=info msg="ImageCreate event name:\"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:17.541440 containerd[1440]: time="2024-10-09T00:53:17.541377726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:17.543218 containerd[1440]: time="2024-10-09T00:53:17.543180073Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"23947353\" in 1.130422714s"
Oct 9 00:53:17.543263 containerd[1440]: time="2024-10-09T00:53:17.543220562Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\""
Oct 9 00:53:17.543902 containerd[1440]: time="2024-10-09T00:53:17.543875022Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\""
Oct 9 00:53:18.570719 containerd[1440]: time="2024-10-09T00:53:18.570661183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:18.571120 containerd[1440]: time="2024-10-09T00:53:18.571071880Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=17018560"
Oct 9 00:53:18.571996 containerd[1440]: time="2024-10-09T00:53:18.571961181Z" level=info msg="ImageCreate event name:\"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:18.574937 containerd[1440]: time="2024-10-09T00:53:18.574911882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:18.577159 containerd[1440]: time="2024-10-09T00:53:18.577039381Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"18505843\" in 1.033035781s"
Oct 9 00:53:18.577159 containerd[1440]: time="2024-10-09T00:53:18.577076549Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference \"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\""
Oct 9 00:53:18.577709 containerd[1440]: time="2024-10-09T00:53:18.577663242Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\""
Oct 9 00:53:19.339461 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 9 00:53:19.346500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 00:53:19.435711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 00:53:19.439352 (kubelet)[1917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 00:53:19.478891 kubelet[1917]: E1009 00:53:19.478840 1917 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 00:53:19.481557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 00:53:19.481689 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 00:53:19.599037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3880384084.mount: Deactivated successfully.
Oct 9 00:53:20.149552 containerd[1440]: time="2024-10-09T00:53:20.149498730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:20.150472 containerd[1440]: time="2024-10-09T00:53:20.150252221Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=26753317"
Oct 9 00:53:20.151166 containerd[1440]: time="2024-10-09T00:53:20.151106141Z" level=info msg="ImageCreate event name:\"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:20.153535 containerd[1440]: time="2024-10-09T00:53:20.153321217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:20.154110 containerd[1440]: time="2024-10-09T00:53:20.153993466Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id \"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"26752334\" in 1.576290414s"
Oct 9 00:53:20.154110 containerd[1440]: time="2024-10-09T00:53:20.154024654Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\""
Oct 9 00:53:20.154492 containerd[1440]: time="2024-10-09T00:53:20.154461729Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 9 00:53:20.783311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1063808140.mount: Deactivated successfully.
Oct 9 00:53:21.437584 containerd[1440]: time="2024-10-09T00:53:21.437534263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:21.438636 containerd[1440]: time="2024-10-09T00:53:21.438568280Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Oct 9 00:53:21.439537 containerd[1440]: time="2024-10-09T00:53:21.439503958Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:21.443335 containerd[1440]: time="2024-10-09T00:53:21.443056695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:21.445017 containerd[1440]: time="2024-10-09T00:53:21.444864860Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.290373553s"
Oct 9 00:53:21.445017 containerd[1440]: time="2024-10-09T00:53:21.444904741Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Oct 9 00:53:21.445501 containerd[1440]: time="2024-10-09T00:53:21.445345157Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 9 00:53:21.886962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount885994088.mount: Deactivated successfully.
Oct 9 00:53:21.891676 containerd[1440]: time="2024-10-09T00:53:21.891629433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:21.892041 containerd[1440]: time="2024-10-09T00:53:21.892010749Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Oct 9 00:53:21.892904 containerd[1440]: time="2024-10-09T00:53:21.892857959Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:21.895139 containerd[1440]: time="2024-10-09T00:53:21.895108667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:21.896357 containerd[1440]: time="2024-10-09T00:53:21.895839283Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 450.466201ms"
Oct 9 00:53:21.896357 containerd[1440]: time="2024-10-09T00:53:21.895870698Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Oct 9 00:53:21.896357 containerd[1440]: time="2024-10-09T00:53:21.896300201Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Oct 9 00:53:22.446582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2314418711.mount: Deactivated successfully.
Oct 9 00:53:24.153268 containerd[1440]: time="2024-10-09T00:53:24.152541965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:24.153268 containerd[1440]: time="2024-10-09T00:53:24.152928431Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=65868194"
Oct 9 00:53:24.153876 containerd[1440]: time="2024-10-09T00:53:24.153836919Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:24.157788 containerd[1440]: time="2024-10-09T00:53:24.157174789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:53:24.159599 containerd[1440]: time="2024-10-09T00:53:24.159564050Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.263221329s"
Oct 9 00:53:24.159599 containerd[1440]: time="2024-10-09T00:53:24.159597678Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Oct 9 00:53:29.597598 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 9 00:53:29.604541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 00:53:29.614396 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 9 00:53:29.614470 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 9 00:53:29.614692 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 00:53:29.631601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 00:53:29.646102 systemd[1]: Reloading requested from client PID 2065 ('systemctl') (unit session-7.scope)...
Oct 9 00:53:29.646116 systemd[1]: Reloading...
Oct 9 00:53:29.712994 zram_generator::config[2105]: No configuration found.
Oct 9 00:53:29.834170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 00:53:29.884146 systemd[1]: Reloading finished in 237 ms.
Oct 9 00:53:29.917594 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 9 00:53:29.917656 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 9 00:53:29.917852 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 00:53:29.920068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 00:53:30.005126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 00:53:30.008836 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 00:53:30.048213 kubelet[2150]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 00:53:30.048213 kubelet[2150]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 9 00:53:30.048213 kubelet[2150]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 00:53:30.048556 kubelet[2150]: I1009 00:53:30.048354 2150 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 9 00:53:31.301616 kubelet[2150]: I1009 00:53:31.301571 2150 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Oct 9 00:53:31.301616 kubelet[2150]: I1009 00:53:31.301604 2150 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 9 00:53:31.301978 kubelet[2150]: I1009 00:53:31.301856 2150 server.go:929] "Client rotation is on, will bootstrap in background"
Oct 9 00:53:31.351929 kubelet[2150]: E1009 00:53:31.351893 2150 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Oct 9 00:53:31.352443 kubelet[2150]: I1009 00:53:31.352430 2150 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 00:53:31.357968 kubelet[2150]: E1009 00:53:31.357923 2150 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Oct 9 00:53:31.357968 kubelet[2150]: I1009 00:53:31.357959 2150 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Oct 9 00:53:31.361210 kubelet[2150]: I1009 00:53:31.361183 2150 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 9 00:53:31.363835 kubelet[2150]: I1009 00:53:31.363807 2150 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 9 00:53:31.363983 kubelet[2150]: I1009 00:53:31.363939 2150 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 00:53:31.364132 kubelet[2150]: I1009 00:53:31.363976 2150 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 9 00:53:31.364278 kubelet[2150]: I1009 00:53:31.364260 2150 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 00:53:31.364278 kubelet[2150]: I1009 00:53:31.364272 2150 container_manager_linux.go:300] "Creating device plugin manager"
Oct 9 00:53:31.364477 kubelet[2150]: I1009 00:53:31.364459 2150 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 00:53:31.366874 kubelet[2150]: I1009 00:53:31.366844 2150 kubelet.go:408] "Attempting to sync node with API server"
Oct 9 00:53:31.366874 kubelet[2150]: I1009 00:53:31.366876 2150 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 00:53:31.366987 kubelet[2150]: I1009 00:53:31.366973 2150 kubelet.go:314] "Adding apiserver pod source"
Oct 9 00:53:31.366987 kubelet[2150]: I1009 00:53:31.366986 2150 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 00:53:31.369288 kubelet[2150]: W1009 00:53:31.369233 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Oct 9 00:53:31.369320 kubelet[2150]: E1009 00:53:31.369293 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Oct 9 00:53:31.369466 kubelet[2150]: W1009 00:53:31.369407 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Oct 9 00:53:31.369506 kubelet[2150]: E1009 00:53:31.369471 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Oct 9 00:53:31.369788 kubelet[2150]: I1009 00:53:31.369759 2150 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1"
Oct 9 00:53:31.371843 kubelet[2150]: I1009 00:53:31.371812 2150 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 00:53:31.373715 kubelet[2150]: W1009 00:53:31.373684 2150 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 9 00:53:31.374444 kubelet[2150]: I1009 00:53:31.374420 2150 server.go:1269] "Started kubelet"
Oct 9 00:53:31.375096 kubelet[2150]: I1009 00:53:31.375041 2150 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 00:53:31.375387 kubelet[2150]: I1009 00:53:31.375363 2150 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 00:53:31.375512 kubelet[2150]: I1009 00:53:31.375488 2150 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 00:53:31.376466 kubelet[2150]: I1009 00:53:31.376438 2150 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 00:53:31.376849 kubelet[2150]: I1009 00:53:31.376826 2150 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 9 00:53:31.377865 kubelet[2150]: I1009 00:53:31.377662 2150 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 9 00:53:31.378151 kubelet[2150]: I1009 00:53:31.378111 2150 server.go:460] "Adding debug handlers to kubelet server"
Oct 9 00:53:31.379482 kubelet[2150]: I1009 00:53:31.379030 2150 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 9 00:53:31.379482 kubelet[2150]: I1009 00:53:31.379140 2150 reconciler.go:26] "Reconciler: start to sync state"
Oct 9 00:53:31.379482 kubelet[2150]: E1009 00:53:31.379173 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 9 00:53:31.379871 kubelet[2150]: W1009 00:53:31.379805 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Oct 9 00:53:31.379918 kubelet[2150]: E1009 00:53:31.379871 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Oct 9 00:53:31.379961 kubelet[2150]: E1009 00:53:31.379935 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms"
Oct 9 00:53:31.380131 kubelet[2150]: I1009 00:53:31.380103 2150 factory.go:221] Registration of the systemd container factory successfully
Oct 9 00:53:31.380228 kubelet[2150]: I1009 00:53:31.380204 2150 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 00:53:31.382371 kubelet[2150]: I1009 00:53:31.382275 2150 factory.go:221] Registration of the containerd container factory successfully
Oct 9 00:53:31.383119 kubelet[2150]: E1009 00:53:31.382191 2150 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fca2aac73f9293 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 00:53:31.374400147 +0000 UTC m=+1.362527458,LastTimestamp:2024-10-09 00:53:31.374400147 +0000 UTC m=+1.362527458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 9 00:53:31.393261 kubelet[2150]: I1009 00:53:31.393221 2150 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 00:53:31.393261 kubelet[2150]: I1009 00:53:31.393235 2150 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 00:53:31.393261 kubelet[2150]: I1009 00:53:31.393250 2150 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 00:53:31.394721 kubelet[2150]: I1009 00:53:31.394660 2150 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 00:53:31.395757 kubelet[2150]: I1009 00:53:31.395735 2150 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 00:53:31.395757 kubelet[2150]: I1009 00:53:31.395756 2150 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 00:53:31.395817 kubelet[2150]: I1009 00:53:31.395774 2150 kubelet.go:2321] "Starting kubelet main sync loop"
Oct 9 00:53:31.395817 kubelet[2150]: E1009 00:53:31.395811 2150 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 00:53:31.455335 kubelet[2150]: I1009 00:53:31.455242 2150 policy_none.go:49] "None policy: Start"
Oct 9 00:53:31.455905 kubelet[2150]: W1009 00:53:31.455861 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
Oct 9 00:53:31.455943 kubelet[2150]: E1009 00:53:31.455921 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
Oct 9 00:53:31.456164 kubelet[2150]: I1009 00:53:31.456147 2150 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 00:53:31.456200 kubelet[2150]: I1009 00:53:31.456176 2150 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 00:53:31.463261 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 9 00:53:31.479857 kubelet[2150]: E1009 00:53:31.479833 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 9 00:53:31.481578 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 9 00:53:31.484013 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 9 00:53:31.496752 kubelet[2150]: E1009 00:53:31.496718 2150 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 9 00:53:31.497037 kubelet[2150]: I1009 00:53:31.496999 2150 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 00:53:31.497384 kubelet[2150]: I1009 00:53:31.497172 2150 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 9 00:53:31.497384 kubelet[2150]: I1009 00:53:31.497189 2150 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 9 00:53:31.498027 kubelet[2150]: I1009 00:53:31.497640 2150 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 00:53:31.498687 kubelet[2150]: E1009 00:53:31.498665 2150 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 9 00:53:31.580453 kubelet[2150]: E1009 00:53:31.580366 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms"
Oct 9 00:53:31.599265 kubelet[2150]: I1009 00:53:31.599213 2150 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Oct 9 00:53:31.599616 kubelet[2150]: E1009 00:53:31.599579 2150 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost"
Oct 9 00:53:31.704047 systemd[1]: Created slice kubepods-burstable-pod06488c00442307e618ae47aa5e692126.slice - libcontainer container kubepods-burstable-pod06488c00442307e618ae47aa5e692126.slice.
Oct 9 00:53:31.726291 systemd[1]: Created slice kubepods-burstable-pod344660bab292c4b91cf719f133c08ba2.slice - libcontainer container kubepods-burstable-pod344660bab292c4b91cf719f133c08ba2.slice.
Oct 9 00:53:31.740286 systemd[1]: Created slice kubepods-burstable-pod1510be5a54dc8eef4f27b06886c891dc.slice - libcontainer container kubepods-burstable-pod1510be5a54dc8eef4f27b06886c891dc.slice.
Oct 9 00:53:31.782026 kubelet[2150]: I1009 00:53:31.781990 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06488c00442307e618ae47aa5e692126-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"06488c00442307e618ae47aa5e692126\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:53:31.782026 kubelet[2150]: I1009 00:53:31.782030 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:53:31.782131 kubelet[2150]: I1009 00:53:31.782049 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:53:31.782131 kubelet[2150]: I1009 00:53:31.782064 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:53:31.782131 kubelet[2150]: I1009 00:53:31.782079 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 9 00:53:31.782131 kubelet[2150]: I1009 00:53:31.782094 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1510be5a54dc8eef4f27b06886c891dc-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"1510be5a54dc8eef4f27b06886c891dc\") " pod="kube-system/kube-scheduler-localhost" Oct 9 00:53:31.782131 kubelet[2150]: I1009 00:53:31.782110 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06488c00442307e618ae47aa5e692126-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"06488c00442307e618ae47aa5e692126\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:53:31.782235 kubelet[2150]: I1009 00:53:31.782124 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:53:31.782235 kubelet[2150]: I1009 00:53:31.782138 2150 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06488c00442307e618ae47aa5e692126-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"06488c00442307e618ae47aa5e692126\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:53:31.801008 kubelet[2150]: I1009 00:53:31.800988 2150 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 9 00:53:31.801344 kubelet[2150]: E1009 00:53:31.801290 2150 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Oct 9 
00:53:31.981048 kubelet[2150]: E1009 00:53:31.980991 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms" Oct 9 00:53:32.024468 kubelet[2150]: E1009 00:53:32.024380 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:32.025129 containerd[1440]: time="2024-10-09T00:53:32.024972069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:06488c00442307e618ae47aa5e692126,Namespace:kube-system,Attempt:0,}" Oct 9 00:53:32.039434 kubelet[2150]: E1009 00:53:32.039410 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:32.039775 containerd[1440]: time="2024-10-09T00:53:32.039748411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:344660bab292c4b91cf719f133c08ba2,Namespace:kube-system,Attempt:0,}" Oct 9 00:53:32.043030 kubelet[2150]: E1009 00:53:32.043006 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:32.043529 containerd[1440]: time="2024-10-09T00:53:32.043298536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:1510be5a54dc8eef4f27b06886c891dc,Namespace:kube-system,Attempt:0,}" Oct 9 00:53:32.202537 kubelet[2150]: I1009 00:53:32.202512 2150 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 9 00:53:32.202868 kubelet[2150]: E1009 00:53:32.202843 2150 kubelet_node_status.go:95] "Unable to register node with API 
server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Oct 9 00:53:32.514223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3104220489.mount: Deactivated successfully. Oct 9 00:53:32.519225 containerd[1440]: time="2024-10-09T00:53:32.519183495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:53:32.520057 containerd[1440]: time="2024-10-09T00:53:32.520016598Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 00:53:32.520889 containerd[1440]: time="2024-10-09T00:53:32.520650241Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:53:32.522324 containerd[1440]: time="2024-10-09T00:53:32.522192401Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Oct 9 00:53:32.522393 containerd[1440]: time="2024-10-09T00:53:32.522358557Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:53:32.523091 containerd[1440]: time="2024-10-09T00:53:32.523063050Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:53:32.523445 containerd[1440]: time="2024-10-09T00:53:32.523324473Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 00:53:32.527345 containerd[1440]: time="2024-10-09T00:53:32.525104519Z" level=info msg="ImageCreate 
event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:53:32.527345 containerd[1440]: time="2024-10-09T00:53:32.526946008Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 501.896165ms" Oct 9 00:53:32.528724 containerd[1440]: time="2024-10-09T00:53:32.528684344Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 485.320162ms" Oct 9 00:53:32.535483 containerd[1440]: time="2024-10-09T00:53:32.535425222Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 495.610525ms" Oct 9 00:53:32.595245 kubelet[2150]: W1009 00:53:32.595208 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Oct 9 00:53:32.595665 kubelet[2150]: E1009 00:53:32.595624 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Oct 9 00:53:32.629266 kubelet[2150]: W1009 00:53:32.629034 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Oct 9 00:53:32.629266 kubelet[2150]: E1009 00:53:32.629071 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Oct 9 00:53:32.657438 containerd[1440]: time="2024-10-09T00:53:32.657103227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:53:32.657438 containerd[1440]: time="2024-10-09T00:53:32.657186285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:53:32.657438 containerd[1440]: time="2024-10-09T00:53:32.657207780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:53:32.658074 containerd[1440]: time="2024-10-09T00:53:32.658002616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:53:32.658074 containerd[1440]: time="2024-10-09T00:53:32.658052691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:53:32.658333 containerd[1440]: time="2024-10-09T00:53:32.658067862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:53:32.658333 containerd[1440]: time="2024-10-09T00:53:32.658213364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:53:32.659084 containerd[1440]: time="2024-10-09T00:53:32.658837280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:53:32.659084 containerd[1440]: time="2024-10-09T00:53:32.658989747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:53:32.659084 containerd[1440]: time="2024-10-09T00:53:32.659000435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:53:32.662062 containerd[1440]: time="2024-10-09T00:53:32.659542374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:53:32.662062 containerd[1440]: time="2024-10-09T00:53:32.659448108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:53:32.678494 systemd[1]: Started cri-containerd-3d1e8c689cf915b553cbfc2d3066c07a3dedb5d224e6e8ad5d5ba938ff811385.scope - libcontainer container 3d1e8c689cf915b553cbfc2d3066c07a3dedb5d224e6e8ad5d5ba938ff811385. Oct 9 00:53:32.679455 systemd[1]: Started cri-containerd-4e27778c6ef4ed883a64be47c6c8aeb1ff58e097818e0e972829ee93b14176a9.scope - libcontainer container 4e27778c6ef4ed883a64be47c6c8aeb1ff58e097818e0e972829ee93b14176a9. 
Oct 9 00:53:32.680364 systemd[1]: Started cri-containerd-b4db3ae13f68158293051410ca56c64970a00f7dac6444a6087b9132b20b3cef.scope - libcontainer container b4db3ae13f68158293051410ca56c64970a00f7dac6444a6087b9132b20b3cef. Oct 9 00:53:32.716030 containerd[1440]: time="2024-10-09T00:53:32.715984519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:1510be5a54dc8eef4f27b06886c891dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e27778c6ef4ed883a64be47c6c8aeb1ff58e097818e0e972829ee93b14176a9\"" Oct 9 00:53:32.718427 kubelet[2150]: E1009 00:53:32.718204 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:32.721299 containerd[1440]: time="2024-10-09T00:53:32.721269137Z" level=info msg="CreateContainer within sandbox \"4e27778c6ef4ed883a64be47c6c8aeb1ff58e097818e0e972829ee93b14176a9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 00:53:32.721487 containerd[1440]: time="2024-10-09T00:53:32.721453867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:06488c00442307e618ae47aa5e692126,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d1e8c689cf915b553cbfc2d3066c07a3dedb5d224e6e8ad5d5ba938ff811385\"" Oct 9 00:53:32.721999 kubelet[2150]: E1009 00:53:32.721977 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:32.724742 containerd[1440]: time="2024-10-09T00:53:32.724639536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:344660bab292c4b91cf719f133c08ba2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4db3ae13f68158293051410ca56c64970a00f7dac6444a6087b9132b20b3cef\"" Oct 9 00:53:32.724962 containerd[1440]: 
time="2024-10-09T00:53:32.724941828Z" level=info msg="CreateContainer within sandbox \"3d1e8c689cf915b553cbfc2d3066c07a3dedb5d224e6e8ad5d5ba938ff811385\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 00:53:32.725496 kubelet[2150]: E1009 00:53:32.725476 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:32.728199 containerd[1440]: time="2024-10-09T00:53:32.728173250Z" level=info msg="CreateContainer within sandbox \"b4db3ae13f68158293051410ca56c64970a00f7dac6444a6087b9132b20b3cef\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 00:53:32.738190 containerd[1440]: time="2024-10-09T00:53:32.738148712Z" level=info msg="CreateContainer within sandbox \"4e27778c6ef4ed883a64be47c6c8aeb1ff58e097818e0e972829ee93b14176a9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d6cacdd2901a2791ca7a495e82639c4598d172b77138e638d136a9e620f4ba36\"" Oct 9 00:53:32.738655 containerd[1440]: time="2024-10-09T00:53:32.738630089Z" level=info msg="StartContainer for \"d6cacdd2901a2791ca7a495e82639c4598d172b77138e638d136a9e620f4ba36\"" Oct 9 00:53:32.740620 containerd[1440]: time="2024-10-09T00:53:32.740581815Z" level=info msg="CreateContainer within sandbox \"3d1e8c689cf915b553cbfc2d3066c07a3dedb5d224e6e8ad5d5ba938ff811385\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"95f2bfb4ab0982524d60059f3d8f7d10eb4665cff06b4fe0e531eaf73be9af0f\"" Oct 9 00:53:32.741197 containerd[1440]: time="2024-10-09T00:53:32.740974289Z" level=info msg="StartContainer for \"95f2bfb4ab0982524d60059f3d8f7d10eb4665cff06b4fe0e531eaf73be9af0f\"" Oct 9 00:53:32.743160 containerd[1440]: time="2024-10-09T00:53:32.743128477Z" level=info msg="CreateContainer within sandbox \"b4db3ae13f68158293051410ca56c64970a00f7dac6444a6087b9132b20b3cef\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ad8f5913eda118eca18e0097305c1b9cd5c3e21fe0a26d727a74ca26766f6769\"" Oct 9 00:53:32.743838 containerd[1440]: time="2024-10-09T00:53:32.743818840Z" level=info msg="StartContainer for \"ad8f5913eda118eca18e0097305c1b9cd5c3e21fe0a26d727a74ca26766f6769\"" Oct 9 00:53:32.768468 systemd[1]: Started cri-containerd-95f2bfb4ab0982524d60059f3d8f7d10eb4665cff06b4fe0e531eaf73be9af0f.scope - libcontainer container 95f2bfb4ab0982524d60059f3d8f7d10eb4665cff06b4fe0e531eaf73be9af0f. Oct 9 00:53:32.769295 systemd[1]: Started cri-containerd-d6cacdd2901a2791ca7a495e82639c4598d172b77138e638d136a9e620f4ba36.scope - libcontainer container d6cacdd2901a2791ca7a495e82639c4598d172b77138e638d136a9e620f4ba36. Oct 9 00:53:32.773699 systemd[1]: Started cri-containerd-ad8f5913eda118eca18e0097305c1b9cd5c3e21fe0a26d727a74ca26766f6769.scope - libcontainer container ad8f5913eda118eca18e0097305c1b9cd5c3e21fe0a26d727a74ca26766f6769. Oct 9 00:53:32.782421 kubelet[2150]: E1009 00:53:32.782002 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="1.6s" Oct 9 00:53:32.823340 containerd[1440]: time="2024-10-09T00:53:32.823228140Z" level=info msg="StartContainer for \"ad8f5913eda118eca18e0097305c1b9cd5c3e21fe0a26d727a74ca26766f6769\" returns successfully" Oct 9 00:53:32.823457 containerd[1440]: time="2024-10-09T00:53:32.823437887Z" level=info msg="StartContainer for \"95f2bfb4ab0982524d60059f3d8f7d10eb4665cff06b4fe0e531eaf73be9af0f\" returns successfully" Oct 9 00:53:32.823532 containerd[1440]: time="2024-10-09T00:53:32.823507816Z" level=info msg="StartContainer for \"d6cacdd2901a2791ca7a495e82639c4598d172b77138e638d136a9e620f4ba36\" returns successfully" Oct 9 00:53:32.931938 kubelet[2150]: W1009 00:53:32.931851 2150 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Oct 9 00:53:32.931938 kubelet[2150]: E1009 00:53:32.931930 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Oct 9 00:53:32.955624 kubelet[2150]: W1009 00:53:32.954581 2150 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Oct 9 00:53:32.955624 kubelet[2150]: E1009 00:53:32.954656 2150 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Oct 9 00:53:33.004823 kubelet[2150]: I1009 00:53:33.004790 2150 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Oct 9 00:53:33.406062 kubelet[2150]: E1009 00:53:33.406032 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:33.407387 kubelet[2150]: E1009 00:53:33.407367 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:33.408254 
kubelet[2150]: E1009 00:53:33.408235 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:34.410431 kubelet[2150]: E1009 00:53:34.410396 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:34.411123 kubelet[2150]: E1009 00:53:34.411090 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:34.807462 kubelet[2150]: E1009 00:53:34.807291 2150 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 9 00:53:34.882270 kubelet[2150]: I1009 00:53:34.882214 2150 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Oct 9 00:53:34.882270 kubelet[2150]: E1009 00:53:34.882262 2150 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 9 00:53:34.893256 kubelet[2150]: E1009 00:53:34.893209 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:53:34.993638 kubelet[2150]: E1009 00:53:34.993596 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:53:35.094431 kubelet[2150]: E1009 00:53:35.094291 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:53:35.194890 kubelet[2150]: E1009 00:53:35.194827 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:53:35.294958 kubelet[2150]: E1009 00:53:35.294894 2150 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:53:35.395789 kubelet[2150]: E1009 00:53:35.395740 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:53:35.411408 kubelet[2150]: E1009 00:53:35.411378 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:35.496593 kubelet[2150]: E1009 00:53:35.496553 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:53:35.597576 kubelet[2150]: E1009 00:53:35.597537 2150 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:53:36.114132 kubelet[2150]: E1009 00:53:36.114094 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:36.369366 kubelet[2150]: I1009 00:53:36.369332 2150 apiserver.go:52] "Watching apiserver" Oct 9 00:53:36.378637 kubelet[2150]: I1009 00:53:36.378603 2150 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 9 00:53:36.412291 kubelet[2150]: E1009 00:53:36.412232 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:36.979196 systemd[1]: Reloading requested from client PID 2431 ('systemctl') (unit session-7.scope)... Oct 9 00:53:36.979211 systemd[1]: Reloading... Oct 9 00:53:37.058399 zram_generator::config[2470]: No configuration found. 
Oct 9 00:53:37.138805 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:53:37.205561 systemd[1]: Reloading finished in 226 ms. Oct 9 00:53:37.239162 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:53:37.257590 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 00:53:37.257858 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:53:37.257942 systemd[1]: kubelet.service: Consumed 1.721s CPU time, 119.7M memory peak, 0B memory swap peak. Oct 9 00:53:37.267676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:53:37.361568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:53:37.366025 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 00:53:37.399957 kubelet[2512]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 00:53:37.399957 kubelet[2512]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 00:53:37.399957 kubelet[2512]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 00:53:37.400425 kubelet[2512]: I1009 00:53:37.399998 2512 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 00:53:37.405899 kubelet[2512]: I1009 00:53:37.405869 2512 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 9 00:53:37.405899 kubelet[2512]: I1009 00:53:37.405897 2512 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 00:53:37.406110 kubelet[2512]: I1009 00:53:37.406095 2512 server.go:929] "Client rotation is on, will bootstrap in background" Oct 9 00:53:37.407382 kubelet[2512]: I1009 00:53:37.407357 2512 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 00:53:37.409505 kubelet[2512]: I1009 00:53:37.409462 2512 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 00:53:37.415333 kubelet[2512]: E1009 00:53:37.413814 2512 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 9 00:53:37.415333 kubelet[2512]: I1009 00:53:37.413848 2512 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 9 00:53:37.415903 kubelet[2512]: I1009 00:53:37.415887 2512 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Oct 9 00:53:37.415989 kubelet[2512]: I1009 00:53:37.415979 2512 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 9 00:53:37.416093 kubelet[2512]: I1009 00:53:37.416074 2512 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 00:53:37.416239 kubelet[2512]: I1009 00:53:37.416094 2512 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 9 00:53:37.416323 kubelet[2512]: I1009 00:53:37.416250 2512 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 00:53:37.416323 kubelet[2512]: I1009 00:53:37.416259 2512 container_manager_linux.go:300] "Creating device plugin manager"
Oct 9 00:53:37.416382 kubelet[2512]: I1009 00:53:37.416307 2512 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 00:53:37.416429 kubelet[2512]: I1009 00:53:37.416418 2512 kubelet.go:408] "Attempting to sync node with API server"
Oct 9 00:53:37.416507 kubelet[2512]: I1009 00:53:37.416431 2512 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 00:53:37.416507 kubelet[2512]: I1009 00:53:37.416449 2512 kubelet.go:314] "Adding apiserver pod source"
Oct 9 00:53:37.416507 kubelet[2512]: I1009 00:53:37.416459 2512 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 00:53:37.418551 kubelet[2512]: I1009 00:53:37.417417 2512 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1"
Oct 9 00:53:37.418551 kubelet[2512]: I1009 00:53:37.418063 2512 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 00:53:37.419006 kubelet[2512]: I1009 00:53:37.418972 2512 server.go:1269] "Started kubelet"
Oct 9 00:53:37.420185 kubelet[2512]: I1009 00:53:37.420135 2512 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 00:53:37.420327 kubelet[2512]: I1009 00:53:37.420252 2512 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 00:53:37.421021 kubelet[2512]: I1009 00:53:37.420983 2512 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 00:53:37.426681 kubelet[2512]: I1009 00:53:37.426646 2512 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 00:53:37.428086 kubelet[2512]: I1009 00:53:37.428050 2512 server.go:460] "Adding debug handlers to kubelet server"
Oct 9 00:53:37.428510 kubelet[2512]: I1009 00:53:37.428468 2512 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 9 00:53:37.430209 kubelet[2512]: I1009 00:53:37.430188 2512 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 9 00:53:37.430341 kubelet[2512]: I1009 00:53:37.430295 2512 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 9 00:53:37.430505 kubelet[2512]: I1009 00:53:37.430484 2512 reconciler.go:26] "Reconciler: start to sync state"
Oct 9 00:53:37.431526 kubelet[2512]: E1009 00:53:37.431498 2512 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 9 00:53:37.431945 kubelet[2512]: E1009 00:53:37.431915 2512 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 9 00:53:37.433283 kubelet[2512]: I1009 00:53:37.433254 2512 factory.go:221] Registration of the systemd container factory successfully
Oct 9 00:53:37.434159 kubelet[2512]: I1009 00:53:37.434132 2512 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 00:53:37.442289 kubelet[2512]: I1009 00:53:37.442260 2512 factory.go:221] Registration of the containerd container factory successfully
Oct 9 00:53:37.443764 kubelet[2512]: I1009 00:53:37.443713 2512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 00:53:37.444638 kubelet[2512]: I1009 00:53:37.444614 2512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 00:53:37.444638 kubelet[2512]: I1009 00:53:37.444636 2512 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 00:53:37.444755 kubelet[2512]: I1009 00:53:37.444652 2512 kubelet.go:2321] "Starting kubelet main sync loop"
Oct 9 00:53:37.444755 kubelet[2512]: E1009 00:53:37.444694 2512 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 00:53:37.475153 kubelet[2512]: I1009 00:53:37.475128 2512 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 00:53:37.475153 kubelet[2512]: I1009 00:53:37.475143 2512 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 00:53:37.475153 kubelet[2512]: I1009 00:53:37.475161 2512 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 00:53:37.475369 kubelet[2512]: I1009 00:53:37.475305 2512 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 9 00:53:37.475407 kubelet[2512]: I1009 00:53:37.475368 2512 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 9 00:53:37.475407 kubelet[2512]: I1009 00:53:37.475393 2512 policy_none.go:49] "None policy: Start"
Oct 9 00:53:37.475956 kubelet[2512]: I1009 00:53:37.475943 2512 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 00:53:37.475992 kubelet[2512]: I1009 00:53:37.475963 2512 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 00:53:37.476127 kubelet[2512]: I1009 00:53:37.476112 2512 state_mem.go:75] "Updated machine memory state"
Oct 9 00:53:37.479741 kubelet[2512]: I1009 00:53:37.479718 2512 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 00:53:37.479883 kubelet[2512]: I1009 00:53:37.479868 2512 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 9 00:53:37.479909 kubelet[2512]: I1009 00:53:37.479884 2512 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 9 00:53:37.480348 kubelet[2512]: I1009 00:53:37.480254 2512 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 00:53:37.554527 kubelet[2512]: E1009 00:53:37.554410 2512 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Oct 9 00:53:37.586110 kubelet[2512]: I1009 00:53:37.586001 2512 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Oct 9 00:53:37.594397 kubelet[2512]: I1009 00:53:37.594343 2512 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Oct 9 00:53:37.594536 kubelet[2512]: I1009 00:53:37.594438 2512 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Oct 9 00:53:37.631705 kubelet[2512]: I1009 00:53:37.631489 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06488c00442307e618ae47aa5e692126-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"06488c00442307e618ae47aa5e692126\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 00:53:37.631705 kubelet[2512]: I1009 00:53:37.631528 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 00:53:37.631705 kubelet[2512]: I1009 00:53:37.631554 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06488c00442307e618ae47aa5e692126-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"06488c00442307e618ae47aa5e692126\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 00:53:37.631705 kubelet[2512]: I1009 00:53:37.631572 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06488c00442307e618ae47aa5e692126-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"06488c00442307e618ae47aa5e692126\") " pod="kube-system/kube-apiserver-localhost"
Oct 9 00:53:37.631705 kubelet[2512]: I1009 00:53:37.631590 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 00:53:37.631937 kubelet[2512]: I1009 00:53:37.631605 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 00:53:37.631937 kubelet[2512]: I1009 00:53:37.631621 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 00:53:37.631937 kubelet[2512]: I1009 00:53:37.631637 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/344660bab292c4b91cf719f133c08ba2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"344660bab292c4b91cf719f133c08ba2\") " pod="kube-system/kube-controller-manager-localhost"
Oct 9 00:53:37.631937 kubelet[2512]: I1009 00:53:37.631657 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1510be5a54dc8eef4f27b06886c891dc-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"1510be5a54dc8eef4f27b06886c891dc\") " pod="kube-system/kube-scheduler-localhost"
Oct 9 00:53:37.855104 kubelet[2512]: E1009 00:53:37.854703 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:53:37.855104 kubelet[2512]: E1009 00:53:37.855067 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:53:37.855104 kubelet[2512]: E1009 00:53:37.855197 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:53:37.980405 sudo[2548]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Oct 9 00:53:37.980698 sudo[2548]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Oct 9 00:53:38.407102 sudo[2548]: pam_unix(sudo:session): session closed for user root
Oct 9 00:53:38.417372 kubelet[2512]: I1009 00:53:38.417324 2512 apiserver.go:52] "Watching apiserver"
Oct 9 00:53:38.430528 kubelet[2512]: I1009 00:53:38.430484 2512 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 9 00:53:38.458517 kubelet[2512]: E1009 00:53:38.457574 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:53:38.458517 kubelet[2512]: E1009 00:53:38.458370 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:53:38.463960 kubelet[2512]: I1009 00:53:38.463900 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.463887246 podStartE2EDuration="1.463887246s" podCreationTimestamp="2024-10-09 00:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:53:38.462871447 +0000 UTC m=+1.092907163" watchObservedRunningTime="2024-10-09 00:53:38.463887246 +0000 UTC m=+1.093922922"
Oct 9 00:53:38.465307 kubelet[2512]: E1009 00:53:38.464952 2512 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 9 00:53:38.465307 kubelet[2512]: E1009 00:53:38.465075 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:53:38.484236 kubelet[2512]: I1009 00:53:38.482656 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.4826168920000002 podStartE2EDuration="2.482616892s" podCreationTimestamp="2024-10-09 00:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:53:38.473270235 +0000 UTC m=+1.103305911" watchObservedRunningTime="2024-10-09 00:53:38.482616892 +0000 UTC m=+1.112652528"
Oct 9 00:53:38.491285 kubelet[2512]: I1009 00:53:38.491228 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.491214193 podStartE2EDuration="1.491214193s" podCreationTimestamp="2024-10-09 00:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:53:38.482377617 +0000 UTC m=+1.112413293" watchObservedRunningTime="2024-10-09 00:53:38.491214193 +0000 UTC m=+1.121249869"
Oct 9 00:53:39.459139 kubelet[2512]: E1009 00:53:39.459097 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:53:39.459533 kubelet[2512]: E1009 00:53:39.459164 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:53:39.952995 kubelet[2512]: E1009 00:53:39.952966 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:53:40.054596 sudo[1624]: pam_unix(sudo:session): session closed for user root
Oct 9 00:53:40.056563 sshd[1621]: pam_unix(sshd:session): session closed for user core
Oct 9 00:53:40.059049 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:39544.service: Deactivated successfully.
Oct 9 00:53:40.061655 systemd[1]: session-7.scope: Deactivated successfully.
Oct 9 00:53:40.061811 systemd[1]: session-7.scope: Consumed 7.854s CPU time, 154.6M memory peak, 0B memory swap peak.
Oct 9 00:53:40.062902 systemd-logind[1429]: Session 7 logged out. Waiting for processes to exit.
Oct 9 00:53:40.063765 systemd-logind[1429]: Removed session 7.
Oct 9 00:53:41.594733 kubelet[2512]: I1009 00:53:41.594605 2512 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 9 00:53:41.595730 containerd[1440]: time="2024-10-09T00:53:41.595647764Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 9 00:53:41.597010 kubelet[2512]: I1009 00:53:41.596130 2512 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 9 00:53:42.525711 kubelet[2512]: W1009 00:53:42.525656 2512 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Oct 9 00:53:42.525961 kubelet[2512]: E1009 00:53:42.525717 2512 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Oct 9 00:53:42.525961 kubelet[2512]: W1009 00:53:42.525770 2512 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Oct 9 00:53:42.525961 kubelet[2512]: E1009 00:53:42.525781 2512 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Oct 9 00:53:42.533027 systemd[1]: Created slice kubepods-besteffort-pod91755aea_8fa4_452e_8c7b_68a854fb6b24.slice - libcontainer container kubepods-besteffort-pod91755aea_8fa4_452e_8c7b_68a854fb6b24.slice.
Oct 9 00:53:42.548573 systemd[1]: Created slice kubepods-burstable-pode7697970_fccd_4ef0_985f_603ec2eb0704.slice - libcontainer container kubepods-burstable-pode7697970_fccd_4ef0_985f_603ec2eb0704.slice.
Oct 9 00:53:42.568200 kubelet[2512]: I1009 00:53:42.568145 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-hostproc\") pod \"cilium-4m8pl\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") " pod="kube-system/cilium-4m8pl"
Oct 9 00:53:42.568200 kubelet[2512]: I1009 00:53:42.568211 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-host-proc-sys-net\") pod \"cilium-4m8pl\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") " pod="kube-system/cilium-4m8pl"
Oct 9 00:53:42.568200 kubelet[2512]: I1009 00:53:42.568245 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-cilium-cgroup\") pod \"cilium-4m8pl\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") " pod="kube-system/cilium-4m8pl"
Oct 9 00:53:42.568200 kubelet[2512]: I1009 00:53:42.568291 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e7697970-fccd-4ef0-985f-603ec2eb0704-clustermesh-secrets\") pod \"cilium-4m8pl\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") " pod="kube-system/cilium-4m8pl"
Oct 9 00:53:42.568200 kubelet[2512]: I1009 00:53:42.568358 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7697970-fccd-4ef0-985f-603ec2eb0704-cilium-config-path\") pod \"cilium-4m8pl\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") " pod="kube-system/cilium-4m8pl"
Oct 9 00:53:42.568200 kubelet[2512]: I1009 00:53:42.568376 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e7697970-fccd-4ef0-985f-603ec2eb0704-hubble-tls\") pod \"cilium-4m8pl\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") " pod="kube-system/cilium-4m8pl"
Oct 9 00:53:42.568894 kubelet[2512]: I1009 00:53:42.568392 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91755aea-8fa4-452e-8c7b-68a854fb6b24-lib-modules\") pod \"kube-proxy-b585h\" (UID: \"91755aea-8fa4-452e-8c7b-68a854fb6b24\") " pod="kube-system/kube-proxy-b585h"
Oct 9 00:53:42.568894 kubelet[2512]: I1009 00:53:42.568408 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-cilium-run\") pod \"cilium-4m8pl\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") " pod="kube-system/cilium-4m8pl"
Oct 9 00:53:42.568894 kubelet[2512]: I1009 00:53:42.568423 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-cni-path\") pod \"cilium-4m8pl\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") " pod="kube-system/cilium-4m8pl"
Oct 9 00:53:42.568894 kubelet[2512]: I1009 00:53:42.568437 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-lib-modules\") pod \"cilium-4m8pl\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") " pod="kube-system/cilium-4m8pl"
Oct 9 00:53:42.568894 kubelet[2512]: I1009 00:53:42.568452 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mthjn\" (UniqueName: \"kubernetes.io/projected/e7697970-fccd-4ef0-985f-603ec2eb0704-kube-api-access-mthjn\") pod \"cilium-4m8pl\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") " pod="kube-system/cilium-4m8pl"
Oct 9 00:53:42.568894 kubelet[2512]: I1009 00:53:42.568466 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-etc-cni-netd\") pod \"cilium-4m8pl\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") " pod="kube-system/cilium-4m8pl"
Oct 9 00:53:42.569009 kubelet[2512]: I1009 00:53:42.568482 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-xtables-lock\") pod \"cilium-4m8pl\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") " pod="kube-system/cilium-4m8pl"
Oct 9 00:53:42.569009 kubelet[2512]: I1009 00:53:42.568497 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91755aea-8fa4-452e-8c7b-68a854fb6b24-kube-proxy\") pod \"kube-proxy-b585h\" (UID: \"91755aea-8fa4-452e-8c7b-68a854fb6b24\") " pod="kube-system/kube-proxy-b585h"
Oct 9 00:53:42.569009 kubelet[2512]: I1009 00:53:42.568514 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-bpf-maps\") pod \"cilium-4m8pl\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") " pod="kube-system/cilium-4m8pl"
Oct 9 00:53:42.569009 kubelet[2512]: I1009 00:53:42.568527 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91755aea-8fa4-452e-8c7b-68a854fb6b24-xtables-lock\") pod \"kube-proxy-b585h\" (UID: \"91755aea-8fa4-452e-8c7b-68a854fb6b24\") " pod="kube-system/kube-proxy-b585h"
Oct 9 00:53:42.569009 kubelet[2512]: I1009 00:53:42.568543 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxzgq\" (UniqueName: \"kubernetes.io/projected/91755aea-8fa4-452e-8c7b-68a854fb6b24-kube-api-access-cxzgq\") pod \"kube-proxy-b585h\" (UID: \"91755aea-8fa4-452e-8c7b-68a854fb6b24\") " pod="kube-system/kube-proxy-b585h"
Oct 9 00:53:42.569103 kubelet[2512]: I1009 00:53:42.568559 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-host-proc-sys-kernel\") pod \"cilium-4m8pl\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") " pod="kube-system/cilium-4m8pl"
Oct 9 00:53:42.737170 systemd[1]: Created slice kubepods-besteffort-podb092076b_b181_4c59_b908_935cbb7c7037.slice - libcontainer container kubepods-besteffort-podb092076b_b181_4c59_b908_935cbb7c7037.slice.
Oct 9 00:53:42.770245 kubelet[2512]: I1009 00:53:42.770184 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjbch\" (UniqueName: \"kubernetes.io/projected/b092076b-b181-4c59-b908-935cbb7c7037-kube-api-access-xjbch\") pod \"cilium-operator-5d85765b45-m9rm2\" (UID: \"b092076b-b181-4c59-b908-935cbb7c7037\") " pod="kube-system/cilium-operator-5d85765b45-m9rm2"
Oct 9 00:53:42.770245 kubelet[2512]: I1009 00:53:42.770236 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b092076b-b181-4c59-b908-935cbb7c7037-cilium-config-path\") pod \"cilium-operator-5d85765b45-m9rm2\" (UID: \"b092076b-b181-4c59-b908-935cbb7c7037\") " pod="kube-system/cilium-operator-5d85765b45-m9rm2"
Oct 9 00:53:43.670981 kubelet[2512]: E1009 00:53:43.670879 2512 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Oct 9 00:53:43.671113 kubelet[2512]: E1009 00:53:43.671001 2512 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91755aea-8fa4-452e-8c7b-68a854fb6b24-kube-proxy podName:91755aea-8fa4-452e-8c7b-68a854fb6b24 nodeName:}" failed. No retries permitted until 2024-10-09 00:53:44.17097027 +0000 UTC m=+6.801005946 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/91755aea-8fa4-452e-8c7b-68a854fb6b24-kube-proxy") pod "kube-proxy-b585h" (UID: "91755aea-8fa4-452e-8c7b-68a854fb6b24") : failed to sync configmap cache: timed out waiting for the condition
Oct 9 00:53:43.679558 kubelet[2512]: E1009 00:53:43.679527 2512 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Oct 9 00:53:43.679558 kubelet[2512]: E1009 00:53:43.679556 2512 projected.go:194] Error preparing data for projected volume kube-api-access-cxzgq for pod kube-system/kube-proxy-b585h: failed to sync configmap cache: timed out waiting for the condition
Oct 9 00:53:43.679650 kubelet[2512]: E1009 00:53:43.679601 2512 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91755aea-8fa4-452e-8c7b-68a854fb6b24-kube-api-access-cxzgq podName:91755aea-8fa4-452e-8c7b-68a854fb6b24 nodeName:}" failed. No retries permitted until 2024-10-09 00:53:44.179588703 +0000 UTC m=+6.809624379 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-cxzgq" (UniqueName: "kubernetes.io/projected/91755aea-8fa4-452e-8c7b-68a854fb6b24-kube-api-access-cxzgq") pod "kube-proxy-b585h" (UID: "91755aea-8fa4-452e-8c7b-68a854fb6b24") : failed to sync configmap cache: timed out waiting for the condition
Oct 9 00:53:43.681013 kubelet[2512]: E1009 00:53:43.680849 2512 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Oct 9 00:53:43.681013 kubelet[2512]: E1009 00:53:43.680870 2512 projected.go:194] Error preparing data for projected volume kube-api-access-mthjn for pod kube-system/cilium-4m8pl: failed to sync configmap cache: timed out waiting for the condition
Oct 9 00:53:43.681013 kubelet[2512]: E1009 00:53:43.680908 2512 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7697970-fccd-4ef0-985f-603ec2eb0704-kube-api-access-mthjn podName:e7697970-fccd-4ef0-985f-603ec2eb0704 nodeName:}" failed. No retries permitted until 2024-10-09 00:53:44.180891059 +0000 UTC m=+6.810926735 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-mthjn" (UniqueName: "kubernetes.io/projected/e7697970-fccd-4ef0-985f-603ec2eb0704-kube-api-access-mthjn") pod "cilium-4m8pl" (UID: "e7697970-fccd-4ef0-985f-603ec2eb0704") : failed to sync configmap cache: timed out waiting for the condition
Oct 9 00:53:43.887163 kubelet[2512]: E1009 00:53:43.887124 2512 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Oct 9 00:53:43.887163 kubelet[2512]: E1009 00:53:43.887149 2512 projected.go:194] Error preparing data for projected volume kube-api-access-xjbch for pod kube-system/cilium-operator-5d85765b45-m9rm2: failed to sync configmap cache: timed out waiting for the condition
Oct 9 00:53:43.887602 kubelet[2512]: E1009 00:53:43.887190 2512 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b092076b-b181-4c59-b908-935cbb7c7037-kube-api-access-xjbch podName:b092076b-b181-4c59-b908-935cbb7c7037 nodeName:}" failed. No retries permitted until 2024-10-09 00:53:44.38717728 +0000 UTC m=+7.017212956 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-xjbch" (UniqueName: "kubernetes.io/projected/b092076b-b181-4c59-b908-935cbb7c7037-kube-api-access-xjbch") pod "cilium-operator-5d85765b45-m9rm2" (UID: "b092076b-b181-4c59-b908-935cbb7c7037") : failed to sync configmap cache: timed out waiting for the condition
Oct 9 00:53:44.343666 kubelet[2512]: E1009 00:53:44.343569 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:53:44.344341 containerd[1440]: time="2024-10-09T00:53:44.344119098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b585h,Uid:91755aea-8fa4-452e-8c7b-68a854fb6b24,Namespace:kube-system,Attempt:0,}"
Oct 9 00:53:44.350923 kubelet[2512]: E1009 00:53:44.350898 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:53:44.352628 containerd[1440]: time="2024-10-09T00:53:44.352069617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4m8pl,Uid:e7697970-fccd-4ef0-985f-603ec2eb0704,Namespace:kube-system,Attempt:0,}"
Oct 9 00:53:44.365869 containerd[1440]: time="2024-10-09T00:53:44.365793658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 00:53:44.365869 containerd[1440]: time="2024-10-09T00:53:44.365838907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 00:53:44.365999 containerd[1440]: time="2024-10-09T00:53:44.365848829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:53:44.365999 containerd[1440]: time="2024-10-09T00:53:44.365906801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:53:44.375741 containerd[1440]: time="2024-10-09T00:53:44.375666324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 00:53:44.375840 containerd[1440]: time="2024-10-09T00:53:44.375720775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 00:53:44.375840 containerd[1440]: time="2024-10-09T00:53:44.375731698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:53:44.375840 containerd[1440]: time="2024-10-09T00:53:44.375791390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:53:44.385633 systemd[1]: Started cri-containerd-863c6eb92d389cf258adbba4bf4b712167c71e5ac0b72a8008b0ffafa72ef3f1.scope - libcontainer container 863c6eb92d389cf258adbba4bf4b712167c71e5ac0b72a8008b0ffafa72ef3f1.
Oct 9 00:53:44.387898 systemd[1]: Started cri-containerd-44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840.scope - libcontainer container 44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840.
Oct 9 00:53:44.408496 containerd[1440]: time="2024-10-09T00:53:44.408458482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b585h,Uid:91755aea-8fa4-452e-8c7b-68a854fb6b24,Namespace:kube-system,Attempt:0,} returns sandbox id \"863c6eb92d389cf258adbba4bf4b712167c71e5ac0b72a8008b0ffafa72ef3f1\"" Oct 9 00:53:44.409151 kubelet[2512]: E1009 00:53:44.409123 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:44.412952 containerd[1440]: time="2024-10-09T00:53:44.412917619Z" level=info msg="CreateContainer within sandbox \"863c6eb92d389cf258adbba4bf4b712167c71e5ac0b72a8008b0ffafa72ef3f1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 00:53:44.413997 containerd[1440]: time="2024-10-09T00:53:44.413974632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4m8pl,Uid:e7697970-fccd-4ef0-985f-603ec2eb0704,Namespace:kube-system,Attempt:0,} returns sandbox id \"44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840\"" Oct 9 00:53:44.415366 kubelet[2512]: E1009 00:53:44.415329 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:44.417905 containerd[1440]: time="2024-10-09T00:53:44.417871535Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 9 00:53:44.430823 containerd[1440]: time="2024-10-09T00:53:44.430781453Z" level=info msg="CreateContainer within sandbox \"863c6eb92d389cf258adbba4bf4b712167c71e5ac0b72a8008b0ffafa72ef3f1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8d5454877cd0c6d953b12e942afd0e51df74a61ecf47d2809156c90148d44e10\"" Oct 9 00:53:44.431507 containerd[1440]: time="2024-10-09T00:53:44.431481754Z" level=info 
msg="StartContainer for \"8d5454877cd0c6d953b12e942afd0e51df74a61ecf47d2809156c90148d44e10\"" Oct 9 00:53:44.456476 systemd[1]: Started cri-containerd-8d5454877cd0c6d953b12e942afd0e51df74a61ecf47d2809156c90148d44e10.scope - libcontainer container 8d5454877cd0c6d953b12e942afd0e51df74a61ecf47d2809156c90148d44e10. Oct 9 00:53:44.491893 containerd[1440]: time="2024-10-09T00:53:44.491831455Z" level=info msg="StartContainer for \"8d5454877cd0c6d953b12e942afd0e51df74a61ecf47d2809156c90148d44e10\" returns successfully" Oct 9 00:53:44.541121 kubelet[2512]: E1009 00:53:44.541066 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:44.541731 containerd[1440]: time="2024-10-09T00:53:44.541681764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-m9rm2,Uid:b092076b-b181-4c59-b908-935cbb7c7037,Namespace:kube-system,Attempt:0,}" Oct 9 00:53:44.572389 containerd[1440]: time="2024-10-09T00:53:44.572177739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:53:44.572389 containerd[1440]: time="2024-10-09T00:53:44.572231630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:53:44.572389 containerd[1440]: time="2024-10-09T00:53:44.572245833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:53:44.572389 containerd[1440]: time="2024-10-09T00:53:44.572328730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:53:44.590491 systemd[1]: Started cri-containerd-3ce08b925e0d1ea5455994dc5776027f74932523802aa8959250b9fdeb81c30c.scope - libcontainer container 3ce08b925e0d1ea5455994dc5776027f74932523802aa8959250b9fdeb81c30c. Oct 9 00:53:44.622555 containerd[1440]: time="2024-10-09T00:53:44.622510145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-m9rm2,Uid:b092076b-b181-4c59-b908-935cbb7c7037,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ce08b925e0d1ea5455994dc5776027f74932523802aa8959250b9fdeb81c30c\"" Oct 9 00:53:44.623359 kubelet[2512]: E1009 00:53:44.623166 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:45.473934 kubelet[2512]: E1009 00:53:45.473713 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:46.478044 kubelet[2512]: E1009 00:53:46.478015 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:47.996809 kubelet[2512]: E1009 00:53:47.996774 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:48.007036 kubelet[2512]: I1009 00:53:48.006990 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b585h" podStartSLOduration=6.00697539 podStartE2EDuration="6.00697539s" podCreationTimestamp="2024-10-09 00:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:53:45.482590215 
+0000 UTC m=+8.112625891" watchObservedRunningTime="2024-10-09 00:53:48.00697539 +0000 UTC m=+10.637011066" Oct 9 00:53:48.497632 kubelet[2512]: E1009 00:53:48.497591 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:49.013595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2401415181.mount: Deactivated successfully. Oct 9 00:53:49.050809 kubelet[2512]: E1009 00:53:49.050768 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:49.965875 kubelet[2512]: E1009 00:53:49.965802 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:52.146417 update_engine[1430]: I20241009 00:53:52.146344 1430 update_attempter.cc:509] Updating boot flags... 
Oct 9 00:53:52.309342 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2913) Oct 9 00:53:52.329234 containerd[1440]: time="2024-10-09T00:53:52.329184032Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:53:52.332494 containerd[1440]: time="2024-10-09T00:53:52.331551985Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651646" Oct 9 00:53:52.335198 containerd[1440]: time="2024-10-09T00:53:52.333389867Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:53:52.336009 containerd[1440]: time="2024-10-09T00:53:52.335872955Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.917961692s" Oct 9 00:53:52.336009 containerd[1440]: time="2024-10-09T00:53:52.335913240Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 9 00:53:52.338825 containerd[1440]: time="2024-10-09T00:53:52.338771618Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 9 00:53:52.340984 containerd[1440]: time="2024-10-09T00:53:52.340862694Z" level=info 
msg="CreateContainer within sandbox \"44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 9 00:53:52.357376 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2917) Oct 9 00:53:52.370046 containerd[1440]: time="2024-10-09T00:53:52.369983338Z" level=info msg="CreateContainer within sandbox \"44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17\"" Oct 9 00:53:52.370573 containerd[1440]: time="2024-10-09T00:53:52.370534971Z" level=info msg="StartContainer for \"db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17\"" Oct 9 00:53:52.393578 systemd[1]: run-containerd-runc-k8s.io-db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17-runc.c0g8yw.mount: Deactivated successfully. Oct 9 00:53:52.406485 systemd[1]: Started cri-containerd-db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17.scope - libcontainer container db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17. Oct 9 00:53:52.479736 systemd[1]: cri-containerd-db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17.scope: Deactivated successfully. 
Oct 9 00:53:52.486842 containerd[1440]: time="2024-10-09T00:53:52.486807439Z" level=info msg="StartContainer for \"db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17\" returns successfully" Oct 9 00:53:52.506050 kubelet[2512]: E1009 00:53:52.505960 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:52.782439 containerd[1440]: time="2024-10-09T00:53:52.772306165Z" level=info msg="shim disconnected" id=db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17 namespace=k8s.io Oct 9 00:53:52.782439 containerd[1440]: time="2024-10-09T00:53:52.782237276Z" level=warning msg="cleaning up after shim disconnected" id=db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17 namespace=k8s.io Oct 9 00:53:52.782439 containerd[1440]: time="2024-10-09T00:53:52.782254198Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:53:53.363099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17-rootfs.mount: Deactivated successfully. Oct 9 00:53:53.388670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958090798.mount: Deactivated successfully. 
Oct 9 00:53:53.513949 kubelet[2512]: E1009 00:53:53.513903 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:53.516333 containerd[1440]: time="2024-10-09T00:53:53.516274833Z" level=info msg="CreateContainer within sandbox \"44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 9 00:53:53.562674 containerd[1440]: time="2024-10-09T00:53:53.562629455Z" level=info msg="CreateContainer within sandbox \"44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64\"" Oct 9 00:53:53.563161 containerd[1440]: time="2024-10-09T00:53:53.563123997Z" level=info msg="StartContainer for \"82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64\"" Oct 9 00:53:53.586504 systemd[1]: Started cri-containerd-82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64.scope - libcontainer container 82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64. Oct 9 00:53:53.616226 containerd[1440]: time="2024-10-09T00:53:53.614141165Z" level=info msg="StartContainer for \"82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64\" returns successfully" Oct 9 00:53:53.633424 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 00:53:53.634002 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:53:53.634153 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 9 00:53:53.639654 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 00:53:53.639834 systemd[1]: cri-containerd-82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64.scope: Deactivated successfully. 
Oct 9 00:53:53.655688 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:53:53.680095 containerd[1440]: time="2024-10-09T00:53:53.680039282Z" level=info msg="shim disconnected" id=82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64 namespace=k8s.io Oct 9 00:53:53.680095 containerd[1440]: time="2024-10-09T00:53:53.680089008Z" level=warning msg="cleaning up after shim disconnected" id=82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64 namespace=k8s.io Oct 9 00:53:53.680095 containerd[1440]: time="2024-10-09T00:53:53.680097969Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:53:53.690583 containerd[1440]: time="2024-10-09T00:53:53.690460151Z" level=warning msg="cleanup warnings time=\"2024-10-09T00:53:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 00:53:53.834799 containerd[1440]: time="2024-10-09T00:53:53.834755395Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:53:53.835433 containerd[1440]: time="2024-10-09T00:53:53.835284101Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138294" Oct 9 00:53:53.836129 containerd[1440]: time="2024-10-09T00:53:53.836093243Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:53:53.837530 containerd[1440]: time="2024-10-09T00:53:53.837496579Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.498672434s" Oct 9 00:53:53.837697 containerd[1440]: time="2024-10-09T00:53:53.837615034Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 9 00:53:53.840201 containerd[1440]: time="2024-10-09T00:53:53.840170755Z" level=info msg="CreateContainer within sandbox \"3ce08b925e0d1ea5455994dc5776027f74932523802aa8959250b9fdeb81c30c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 9 00:53:53.849615 containerd[1440]: time="2024-10-09T00:53:53.849574616Z" level=info msg="CreateContainer within sandbox \"3ce08b925e0d1ea5455994dc5776027f74932523802aa8959250b9fdeb81c30c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99\"" Oct 9 00:53:53.851077 containerd[1440]: time="2024-10-09T00:53:53.851002516Z" level=info msg="StartContainer for \"9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99\"" Oct 9 00:53:53.876505 systemd[1]: Started cri-containerd-9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99.scope - libcontainer container 9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99. Oct 9 00:53:53.899859 containerd[1440]: time="2024-10-09T00:53:53.899817127Z" level=info msg="StartContainer for \"9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99\" returns successfully" Oct 9 00:53:54.364193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2493880215.mount: Deactivated successfully. 
Oct 9 00:53:54.516698 kubelet[2512]: E1009 00:53:54.516619 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:54.520764 kubelet[2512]: E1009 00:53:54.520557 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:54.523255 containerd[1440]: time="2024-10-09T00:53:54.523215852Z" level=info msg="CreateContainer within sandbox \"44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 9 00:53:54.536409 kubelet[2512]: I1009 00:53:54.535723 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-m9rm2" podStartSLOduration=3.321240597 podStartE2EDuration="12.535706586s" podCreationTimestamp="2024-10-09 00:53:42 +0000 UTC" firstStartedPulling="2024-10-09 00:53:44.623836652 +0000 UTC m=+7.253872288" lastFinishedPulling="2024-10-09 00:53:53.838302601 +0000 UTC m=+16.468338277" observedRunningTime="2024-10-09 00:53:54.535132277 +0000 UTC m=+17.165167953" watchObservedRunningTime="2024-10-09 00:53:54.535706586 +0000 UTC m=+17.165742262" Oct 9 00:53:54.567423 containerd[1440]: time="2024-10-09T00:53:54.567373453Z" level=info msg="CreateContainer within sandbox \"44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452\"" Oct 9 00:53:54.568152 containerd[1440]: time="2024-10-09T00:53:54.568122543Z" level=info msg="StartContainer for \"cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452\"" Oct 9 00:53:54.615397 systemd[1]: Started cri-containerd-cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452.scope 
- libcontainer container cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452. Oct 9 00:53:54.654851 containerd[1440]: time="2024-10-09T00:53:54.654790749Z" level=info msg="StartContainer for \"cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452\" returns successfully" Oct 9 00:53:54.678805 systemd[1]: cri-containerd-cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452.scope: Deactivated successfully. Oct 9 00:53:54.711498 containerd[1440]: time="2024-10-09T00:53:54.711439244Z" level=info msg="shim disconnected" id=cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452 namespace=k8s.io Oct 9 00:53:54.711865 containerd[1440]: time="2024-10-09T00:53:54.711708797Z" level=warning msg="cleaning up after shim disconnected" id=cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452 namespace=k8s.io Oct 9 00:53:54.711865 containerd[1440]: time="2024-10-09T00:53:54.711725959Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:53:55.372461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452-rootfs.mount: Deactivated successfully. 
Oct 9 00:53:55.524664 kubelet[2512]: E1009 00:53:55.523861 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:55.524664 kubelet[2512]: E1009 00:53:55.524037 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:55.527947 containerd[1440]: time="2024-10-09T00:53:55.527910423Z" level=info msg="CreateContainer within sandbox \"44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 9 00:53:55.545719 containerd[1440]: time="2024-10-09T00:53:55.545519990Z" level=info msg="CreateContainer within sandbox \"44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225\"" Oct 9 00:53:55.551793 containerd[1440]: time="2024-10-09T00:53:55.551745620Z" level=info msg="StartContainer for \"e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225\"" Oct 9 00:53:55.583481 systemd[1]: Started cri-containerd-e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225.scope - libcontainer container e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225. Oct 9 00:53:55.617300 containerd[1440]: time="2024-10-09T00:53:55.615834045Z" level=info msg="StartContainer for \"e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225\" returns successfully" Oct 9 00:53:55.619027 systemd[1]: cri-containerd-e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225.scope: Deactivated successfully. 
Oct 9 00:53:55.652482 containerd[1440]: time="2024-10-09T00:53:55.652355647Z" level=info msg="shim disconnected" id=e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225 namespace=k8s.io Oct 9 00:53:55.652482 containerd[1440]: time="2024-10-09T00:53:55.652414254Z" level=warning msg="cleaning up after shim disconnected" id=e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225 namespace=k8s.io Oct 9 00:53:55.652482 containerd[1440]: time="2024-10-09T00:53:55.652422535Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:53:56.366869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225-rootfs.mount: Deactivated successfully. Oct 9 00:53:56.526948 kubelet[2512]: E1009 00:53:56.526907 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:56.531715 containerd[1440]: time="2024-10-09T00:53:56.531677675Z" level=info msg="CreateContainer within sandbox \"44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 9 00:53:56.557826 containerd[1440]: time="2024-10-09T00:53:56.557772592Z" level=info msg="CreateContainer within sandbox \"44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc\"" Oct 9 00:53:56.558308 containerd[1440]: time="2024-10-09T00:53:56.558281808Z" level=info msg="StartContainer for \"629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc\"" Oct 9 00:53:56.588474 systemd[1]: Started cri-containerd-629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc.scope - libcontainer container 629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc. 
Oct 9 00:53:56.612899 containerd[1440]: time="2024-10-09T00:53:56.612851980Z" level=info msg="StartContainer for \"629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc\" returns successfully" Oct 9 00:53:56.728452 kubelet[2512]: I1009 00:53:56.727670 2512 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Oct 9 00:53:56.758832 systemd[1]: Created slice kubepods-burstable-pod0176b844_1160_4e1c_ab96_b541b6c2994f.slice - libcontainer container kubepods-burstable-pod0176b844_1160_4e1c_ab96_b541b6c2994f.slice. Oct 9 00:53:56.764899 systemd[1]: Created slice kubepods-burstable-podac2aab93_7495_4ef4_8762_ceefa3a22329.slice - libcontainer container kubepods-burstable-podac2aab93_7495_4ef4_8762_ceefa3a22329.slice. Oct 9 00:53:56.891339 kubelet[2512]: I1009 00:53:56.891221 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0176b844-1160-4e1c-ab96-b541b6c2994f-config-volume\") pod \"coredns-6f6b679f8f-b2c8j\" (UID: \"0176b844-1160-4e1c-ab96-b541b6c2994f\") " pod="kube-system/coredns-6f6b679f8f-b2c8j" Oct 9 00:53:56.891339 kubelet[2512]: I1009 00:53:56.891262 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdq5w\" (UniqueName: \"kubernetes.io/projected/0176b844-1160-4e1c-ab96-b541b6c2994f-kube-api-access-pdq5w\") pod \"coredns-6f6b679f8f-b2c8j\" (UID: \"0176b844-1160-4e1c-ab96-b541b6c2994f\") " pod="kube-system/coredns-6f6b679f8f-b2c8j" Oct 9 00:53:56.891339 kubelet[2512]: I1009 00:53:56.891286 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac2aab93-7495-4ef4-8762-ceefa3a22329-config-volume\") pod \"coredns-6f6b679f8f-ntvdw\" (UID: \"ac2aab93-7495-4ef4-8762-ceefa3a22329\") " pod="kube-system/coredns-6f6b679f8f-ntvdw" Oct 9 00:53:56.891551 kubelet[2512]: 
I1009 00:53:56.891345 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fkj9\" (UniqueName: \"kubernetes.io/projected/ac2aab93-7495-4ef4-8762-ceefa3a22329-kube-api-access-6fkj9\") pod \"coredns-6f6b679f8f-ntvdw\" (UID: \"ac2aab93-7495-4ef4-8762-ceefa3a22329\") " pod="kube-system/coredns-6f6b679f8f-ntvdw" Oct 9 00:53:57.063439 kubelet[2512]: E1009 00:53:57.063064 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:57.064157 containerd[1440]: time="2024-10-09T00:53:57.064117726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b2c8j,Uid:0176b844-1160-4e1c-ab96-b541b6c2994f,Namespace:kube-system,Attempt:0,}" Oct 9 00:53:57.067632 kubelet[2512]: E1009 00:53:57.067449 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:57.067811 containerd[1440]: time="2024-10-09T00:53:57.067771865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ntvdw,Uid:ac2aab93-7495-4ef4-8762-ceefa3a22329,Namespace:kube-system,Attempt:0,}" Oct 9 00:53:57.530859 kubelet[2512]: E1009 00:53:57.530557 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:53:57.544904 kubelet[2512]: I1009 00:53:57.544670 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4m8pl" podStartSLOduration=7.622425382 podStartE2EDuration="15.54465551s" podCreationTimestamp="2024-10-09 00:53:42 +0000 UTC" firstStartedPulling="2024-10-09 00:53:44.416278655 +0000 UTC m=+7.046314291" lastFinishedPulling="2024-10-09 00:53:52.338508743 +0000 UTC m=+14.968544419" 
observedRunningTime="2024-10-09 00:53:57.543327612 +0000 UTC m=+20.173363288" watchObservedRunningTime="2024-10-09 00:53:57.54465551 +0000 UTC m=+20.174691186"
Oct 9 00:53:58.533703 kubelet[2512]: E1009 00:53:58.533665 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:53:58.772596 systemd-networkd[1384]: cilium_host: Link UP
Oct 9 00:53:58.772732 systemd-networkd[1384]: cilium_net: Link UP
Oct 9 00:53:58.772849 systemd-networkd[1384]: cilium_net: Gained carrier
Oct 9 00:53:58.772954 systemd-networkd[1384]: cilium_host: Gained carrier
Oct 9 00:53:58.872377 systemd-networkd[1384]: cilium_vxlan: Link UP
Oct 9 00:53:58.872383 systemd-networkd[1384]: cilium_vxlan: Gained carrier
Oct 9 00:53:59.026488 systemd-networkd[1384]: cilium_host: Gained IPv6LL
Oct 9 00:53:59.050465 systemd-networkd[1384]: cilium_net: Gained IPv6LL
Oct 9 00:53:59.196340 kernel: NET: Registered PF_ALG protocol family
Oct 9 00:53:59.535116 kubelet[2512]: E1009 00:53:59.535071 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:53:59.765825 systemd-networkd[1384]: lxc_health: Link UP
Oct 9 00:53:59.772957 systemd-networkd[1384]: lxc_health: Gained carrier
Oct 9 00:54:00.185652 systemd-networkd[1384]: lxc3ec418708ce7: Link UP
Oct 9 00:54:00.204342 kernel: eth0: renamed from tmp10d99
Oct 9 00:54:00.217341 kernel: eth0: renamed from tmp6d61d
Oct 9 00:54:00.224575 systemd-networkd[1384]: lxc1a98692f324c: Link UP
Oct 9 00:54:00.225081 systemd-networkd[1384]: lxc1a98692f324c: Gained carrier
Oct 9 00:54:00.235504 systemd-networkd[1384]: lxc3ec418708ce7: Gained carrier
Oct 9 00:54:00.537269 kubelet[2512]: E1009 00:54:00.537149 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:00.674513 systemd-networkd[1384]: cilium_vxlan: Gained IPv6LL
Oct 9 00:54:00.930509 systemd-networkd[1384]: lxc_health: Gained IPv6LL
Oct 9 00:54:02.146487 systemd-networkd[1384]: lxc1a98692f324c: Gained IPv6LL
Oct 9 00:54:02.274430 systemd-networkd[1384]: lxc3ec418708ce7: Gained IPv6LL
Oct 9 00:54:03.663642 containerd[1440]: time="2024-10-09T00:54:03.663368719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 00:54:03.663642 containerd[1440]: time="2024-10-09T00:54:03.663424923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 00:54:03.663642 containerd[1440]: time="2024-10-09T00:54:03.663440605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:54:03.663642 containerd[1440]: time="2024-10-09T00:54:03.663514731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:54:03.681217 containerd[1440]: time="2024-10-09T00:54:03.681128659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 00:54:03.681461 containerd[1440]: time="2024-10-09T00:54:03.681390960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 00:54:03.681461 containerd[1440]: time="2024-10-09T00:54:03.681417242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:54:03.681924 containerd[1440]: time="2024-10-09T00:54:03.681885119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:54:03.684643 systemd[1]: Started cri-containerd-6d61dfb44473af82ef8246fec94bbbded0b284fc07650244ba46f30f67b49533.scope - libcontainer container 6d61dfb44473af82ef8246fec94bbbded0b284fc07650244ba46f30f67b49533.
Oct 9 00:54:03.697014 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 9 00:54:03.707494 systemd[1]: Started cri-containerd-10d994267eea6405aa0265855fc84b339eb7216fd37a053ab9ac8c2d2e650c2d.scope - libcontainer container 10d994267eea6405aa0265855fc84b339eb7216fd37a053ab9ac8c2d2e650c2d.
Oct 9 00:54:03.717610 containerd[1440]: time="2024-10-09T00:54:03.717554212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b2c8j,Uid:0176b844-1160-4e1c-ab96-b541b6c2994f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d61dfb44473af82ef8246fec94bbbded0b284fc07650244ba46f30f67b49533\""
Oct 9 00:54:03.719830 kubelet[2512]: E1009 00:54:03.719625 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:03.720282 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 9 00:54:03.722502 containerd[1440]: time="2024-10-09T00:54:03.722453963Z" level=info msg="CreateContainer within sandbox \"6d61dfb44473af82ef8246fec94bbbded0b284fc07650244ba46f30f67b49533\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 9 00:54:03.737197 containerd[1440]: time="2024-10-09T00:54:03.737159899Z" level=info msg="CreateContainer within sandbox \"6d61dfb44473af82ef8246fec94bbbded0b284fc07650244ba46f30f67b49533\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1cb4365f296b34b1a8ea19f2e8b41a484f34d877584be0e57543c589129685ea\""
Oct 9 00:54:03.738510 containerd[1440]: time="2024-10-09T00:54:03.738440242Z" level=info msg="StartContainer for \"1cb4365f296b34b1a8ea19f2e8b41a484f34d877584be0e57543c589129685ea\""
Oct 9 00:54:03.740609 containerd[1440]: time="2024-10-09T00:54:03.740096334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ntvdw,Uid:ac2aab93-7495-4ef4-8762-ceefa3a22329,Namespace:kube-system,Attempt:0,} returns sandbox id \"10d994267eea6405aa0265855fc84b339eb7216fd37a053ab9ac8c2d2e650c2d\""
Oct 9 00:54:03.742131 kubelet[2512]: E1009 00:54:03.742018 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:03.745166 containerd[1440]: time="2024-10-09T00:54:03.745129617Z" level=info msg="CreateContainer within sandbox \"10d994267eea6405aa0265855fc84b339eb7216fd37a053ab9ac8c2d2e650c2d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 9 00:54:03.765409 containerd[1440]: time="2024-10-09T00:54:03.765341633Z" level=info msg="CreateContainer within sandbox \"10d994267eea6405aa0265855fc84b339eb7216fd37a053ab9ac8c2d2e650c2d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc89813692db266170d66451148ee2369fcd7a4efabd7b74dfa9fa8747e2832d\""
Oct 9 00:54:03.767655 containerd[1440]: time="2024-10-09T00:54:03.767620095Z" level=info msg="StartContainer for \"fc89813692db266170d66451148ee2369fcd7a4efabd7b74dfa9fa8747e2832d\""
Oct 9 00:54:03.802476 systemd[1]: Started cri-containerd-1cb4365f296b34b1a8ea19f2e8b41a484f34d877584be0e57543c589129685ea.scope - libcontainer container 1cb4365f296b34b1a8ea19f2e8b41a484f34d877584be0e57543c589129685ea.
Oct 9 00:54:03.803403 systemd[1]: Started cri-containerd-fc89813692db266170d66451148ee2369fcd7a4efabd7b74dfa9fa8747e2832d.scope - libcontainer container fc89813692db266170d66451148ee2369fcd7a4efabd7b74dfa9fa8747e2832d.
Oct 9 00:54:03.844326 containerd[1440]: time="2024-10-09T00:54:03.844269904Z" level=info msg="StartContainer for \"1cb4365f296b34b1a8ea19f2e8b41a484f34d877584be0e57543c589129685ea\" returns successfully"
Oct 9 00:54:03.844435 containerd[1440]: time="2024-10-09T00:54:03.844297866Z" level=info msg="StartContainer for \"fc89813692db266170d66451148ee2369fcd7a4efabd7b74dfa9fa8747e2832d\" returns successfully"
Oct 9 00:54:04.549263 kubelet[2512]: E1009 00:54:04.548292 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:04.551740 kubelet[2512]: E1009 00:54:04.551661 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:04.561290 kubelet[2512]: I1009 00:54:04.560541 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ntvdw" podStartSLOduration=22.559962573 podStartE2EDuration="22.559962573s" podCreationTimestamp="2024-10-09 00:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:54:04.558567986 +0000 UTC m=+27.188603662" watchObservedRunningTime="2024-10-09 00:54:04.559962573 +0000 UTC m=+27.189998249"
Oct 9 00:54:04.569221 kubelet[2512]: I1009 00:54:04.569172 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-b2c8j" podStartSLOduration=22.569158839 podStartE2EDuration="22.569158839s" podCreationTimestamp="2024-10-09 00:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:54:04.568365418 +0000 UTC m=+27.198401094" watchObservedRunningTime="2024-10-09 00:54:04.569158839 +0000 UTC m=+27.199194515"
Oct 9 00:54:05.552555 kubelet[2512]: E1009 00:54:05.552515 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:05.553460 kubelet[2512]: E1009 00:54:05.553088 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:06.554344 kubelet[2512]: E1009 00:54:06.554295 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:06.554773 kubelet[2512]: E1009 00:54:06.554427 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:06.683988 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:35810.service - OpenSSH per-connection server daemon (10.0.0.1:35810).
Oct 9 00:54:06.720565 sshd[3932]: Accepted publickey for core from 10.0.0.1 port 35810 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:54:06.721955 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:54:06.725644 systemd-logind[1429]: New session 8 of user core.
Oct 9 00:54:06.739531 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 9 00:54:06.861128 sshd[3932]: pam_unix(sshd:session): session closed for user core
Oct 9 00:54:06.864610 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:35810.service: Deactivated successfully.
Oct 9 00:54:06.866962 systemd[1]: session-8.scope: Deactivated successfully.
Oct 9 00:54:06.867543 systemd-logind[1429]: Session 8 logged out. Waiting for processes to exit.
Oct 9 00:54:06.868495 systemd-logind[1429]: Removed session 8.
Oct 9 00:54:10.433894 kubelet[2512]: I1009 00:54:10.433523 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 9 00:54:10.434382 kubelet[2512]: E1009 00:54:10.434364 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:10.565331 kubelet[2512]: E1009 00:54:10.565263 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:11.871798 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:35812.service - OpenSSH per-connection server daemon (10.0.0.1:35812).
Oct 9 00:54:11.907632 sshd[3952]: Accepted publickey for core from 10.0.0.1 port 35812 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:54:11.908914 sshd[3952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:54:11.912820 systemd-logind[1429]: New session 9 of user core.
Oct 9 00:54:11.922572 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 9 00:54:12.044230 sshd[3952]: pam_unix(sshd:session): session closed for user core
Oct 9 00:54:12.048431 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:35812.service: Deactivated successfully.
Oct 9 00:54:12.050354 systemd[1]: session-9.scope: Deactivated successfully.
Oct 9 00:54:12.052153 systemd-logind[1429]: Session 9 logged out. Waiting for processes to exit.
Oct 9 00:54:12.053027 systemd-logind[1429]: Removed session 9.
Oct 9 00:54:17.058793 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:58602.service - OpenSSH per-connection server daemon (10.0.0.1:58602).
Oct 9 00:54:17.127479 sshd[3972]: Accepted publickey for core from 10.0.0.1 port 58602 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:54:17.128659 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:54:17.132686 systemd-logind[1429]: New session 10 of user core.
Oct 9 00:54:17.141444 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 9 00:54:17.258546 sshd[3972]: pam_unix(sshd:session): session closed for user core
Oct 9 00:54:17.271657 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:58602.service: Deactivated successfully.
Oct 9 00:54:17.273015 systemd[1]: session-10.scope: Deactivated successfully.
Oct 9 00:54:17.275381 systemd-logind[1429]: Session 10 logged out. Waiting for processes to exit.
Oct 9 00:54:17.283604 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:58604.service - OpenSSH per-connection server daemon (10.0.0.1:58604).
Oct 9 00:54:17.285521 systemd-logind[1429]: Removed session 10.
Oct 9 00:54:17.320515 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 58604 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:54:17.321288 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:54:17.325580 systemd-logind[1429]: New session 11 of user core.
Oct 9 00:54:17.339454 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 9 00:54:17.507627 sshd[3987]: pam_unix(sshd:session): session closed for user core
Oct 9 00:54:17.527841 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:58612.service - OpenSSH per-connection server daemon (10.0.0.1:58612).
Oct 9 00:54:17.528337 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:58604.service: Deactivated successfully.
Oct 9 00:54:17.529811 systemd[1]: session-11.scope: Deactivated successfully.
Oct 9 00:54:17.531187 systemd-logind[1429]: Session 11 logged out. Waiting for processes to exit.
Oct 9 00:54:17.531994 systemd-logind[1429]: Removed session 11.
Oct 9 00:54:17.562540 sshd[3997]: Accepted publickey for core from 10.0.0.1 port 58612 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:54:17.563682 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:54:17.567256 systemd-logind[1429]: New session 12 of user core.
Oct 9 00:54:17.580506 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 9 00:54:17.692011 sshd[3997]: pam_unix(sshd:session): session closed for user core
Oct 9 00:54:17.695271 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:58612.service: Deactivated successfully.
Oct 9 00:54:17.696928 systemd[1]: session-12.scope: Deactivated successfully.
Oct 9 00:54:17.697492 systemd-logind[1429]: Session 12 logged out. Waiting for processes to exit.
Oct 9 00:54:17.698220 systemd-logind[1429]: Removed session 12.
Oct 9 00:54:22.706395 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:58462.service - OpenSSH per-connection server daemon (10.0.0.1:58462).
Oct 9 00:54:22.744692 sshd[4014]: Accepted publickey for core from 10.0.0.1 port 58462 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:54:22.746110 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:54:22.749738 systemd-logind[1429]: New session 13 of user core.
Oct 9 00:54:22.759559 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 9 00:54:22.870371 sshd[4014]: pam_unix(sshd:session): session closed for user core
Oct 9 00:54:22.873792 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:58462.service: Deactivated successfully.
Oct 9 00:54:22.875651 systemd[1]: session-13.scope: Deactivated successfully.
Oct 9 00:54:22.876284 systemd-logind[1429]: Session 13 logged out. Waiting for processes to exit.
Oct 9 00:54:22.878750 systemd-logind[1429]: Removed session 13.
Oct 9 00:54:27.880867 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:58474.service - OpenSSH per-connection server daemon (10.0.0.1:58474).
Oct 9 00:54:27.920623 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 58474 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:54:27.921787 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:54:27.925736 systemd-logind[1429]: New session 14 of user core.
Oct 9 00:54:27.931459 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 9 00:54:28.050623 sshd[4029]: pam_unix(sshd:session): session closed for user core
Oct 9 00:54:28.060147 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:58474.service: Deactivated successfully.
Oct 9 00:54:28.061710 systemd[1]: session-14.scope: Deactivated successfully.
Oct 9 00:54:28.063499 systemd-logind[1429]: Session 14 logged out. Waiting for processes to exit.
Oct 9 00:54:28.065210 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:58490.service - OpenSSH per-connection server daemon (10.0.0.1:58490).
Oct 9 00:54:28.066927 systemd-logind[1429]: Removed session 14.
Oct 9 00:54:28.100210 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 58490 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:54:28.101595 sshd[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:54:28.105950 systemd-logind[1429]: New session 15 of user core.
Oct 9 00:54:28.120461 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 9 00:54:28.482075 sshd[4044]: pam_unix(sshd:session): session closed for user core
Oct 9 00:54:28.499739 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:58490.service: Deactivated successfully.
Oct 9 00:54:28.501300 systemd[1]: session-15.scope: Deactivated successfully.
Oct 9 00:54:28.508253 systemd-logind[1429]: Session 15 logged out. Waiting for processes to exit.
Oct 9 00:54:28.517596 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:58492.service - OpenSSH per-connection server daemon (10.0.0.1:58492).
Oct 9 00:54:28.519205 systemd-logind[1429]: Removed session 15.
Oct 9 00:54:28.549137 sshd[4056]: Accepted publickey for core from 10.0.0.1 port 58492 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:54:28.550304 sshd[4056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:54:28.555239 systemd-logind[1429]: New session 16 of user core.
Oct 9 00:54:28.563455 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 9 00:54:29.983657 sshd[4056]: pam_unix(sshd:session): session closed for user core
Oct 9 00:54:29.997475 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:58492.service: Deactivated successfully.
Oct 9 00:54:29.999021 systemd[1]: session-16.scope: Deactivated successfully.
Oct 9 00:54:30.001487 systemd-logind[1429]: Session 16 logged out. Waiting for processes to exit.
Oct 9 00:54:30.011123 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:58506.service - OpenSSH per-connection server daemon (10.0.0.1:58506).
Oct 9 00:54:30.013115 systemd-logind[1429]: Removed session 16.
Oct 9 00:54:30.045570 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 58506 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:54:30.047107 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:54:30.051322 systemd-logind[1429]: New session 17 of user core.
Oct 9 00:54:30.059459 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 9 00:54:30.272973 sshd[4076]: pam_unix(sshd:session): session closed for user core
Oct 9 00:54:30.283634 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:58506.service: Deactivated successfully.
Oct 9 00:54:30.285716 systemd[1]: session-17.scope: Deactivated successfully.
Oct 9 00:54:30.287359 systemd-logind[1429]: Session 17 logged out. Waiting for processes to exit.
Oct 9 00:54:30.295575 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:58508.service - OpenSSH per-connection server daemon (10.0.0.1:58508).
Oct 9 00:54:30.296407 systemd-logind[1429]: Removed session 17.
Oct 9 00:54:30.327498 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 58508 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:54:30.328697 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:54:30.332337 systemd-logind[1429]: New session 18 of user core.
Oct 9 00:54:30.338531 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 9 00:54:30.450680 sshd[4088]: pam_unix(sshd:session): session closed for user core
Oct 9 00:54:30.455281 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:58508.service: Deactivated successfully.
Oct 9 00:54:30.456921 systemd[1]: session-18.scope: Deactivated successfully.
Oct 9 00:54:30.457765 systemd-logind[1429]: Session 18 logged out. Waiting for processes to exit.
Oct 9 00:54:30.458920 systemd-logind[1429]: Removed session 18.
Oct 9 00:54:35.465914 systemd[1]: Started sshd@18-10.0.0.92:22-10.0.0.1:53630.service - OpenSSH per-connection server daemon (10.0.0.1:53630).
Oct 9 00:54:35.502745 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 53630 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:54:35.504125 sshd[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:54:35.508255 systemd-logind[1429]: New session 19 of user core.
Oct 9 00:54:35.517532 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 9 00:54:35.637522 sshd[4106]: pam_unix(sshd:session): session closed for user core
Oct 9 00:54:35.640920 systemd[1]: session-19.scope: Deactivated successfully.
Oct 9 00:54:35.642130 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:53630.service: Deactivated successfully.
Oct 9 00:54:35.645060 systemd-logind[1429]: Session 19 logged out. Waiting for processes to exit.
Oct 9 00:54:35.646540 systemd-logind[1429]: Removed session 19.
Oct 9 00:54:40.651709 systemd[1]: Started sshd@19-10.0.0.92:22-10.0.0.1:53642.service - OpenSSH per-connection server daemon (10.0.0.1:53642).
Oct 9 00:54:40.692027 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 53642 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:54:40.692574 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:54:40.697282 systemd-logind[1429]: New session 20 of user core.
Oct 9 00:54:40.708462 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 9 00:54:40.823518 sshd[4122]: pam_unix(sshd:session): session closed for user core
Oct 9 00:54:40.826049 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:53642.service: Deactivated successfully.
Oct 9 00:54:40.827779 systemd[1]: session-20.scope: Deactivated successfully.
Oct 9 00:54:40.833620 systemd-logind[1429]: Session 20 logged out. Waiting for processes to exit.
Oct 9 00:54:40.835931 systemd-logind[1429]: Removed session 20.
Oct 9 00:54:45.834211 systemd[1]: Started sshd@20-10.0.0.92:22-10.0.0.1:59008.service - OpenSSH per-connection server daemon (10.0.0.1:59008).
Oct 9 00:54:45.873728 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 59008 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:54:45.874941 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:54:45.879045 systemd-logind[1429]: New session 21 of user core.
Oct 9 00:54:45.885468 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 9 00:54:45.998231 sshd[4141]: pam_unix(sshd:session): session closed for user core
Oct 9 00:54:46.011808 systemd[1]: sshd@20-10.0.0.92:22-10.0.0.1:59008.service: Deactivated successfully.
Oct 9 00:54:46.014887 systemd[1]: session-21.scope: Deactivated successfully.
Oct 9 00:54:46.016374 systemd-logind[1429]: Session 21 logged out. Waiting for processes to exit.
Oct 9 00:54:46.025644 systemd[1]: Started sshd@21-10.0.0.92:22-10.0.0.1:59020.service - OpenSSH per-connection server daemon (10.0.0.1:59020).
Oct 9 00:54:46.026761 systemd-logind[1429]: Removed session 21.
Oct 9 00:54:46.059508 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 59020 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:54:46.060704 sshd[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:54:46.064387 systemd-logind[1429]: New session 22 of user core.
Oct 9 00:54:46.080466 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 9 00:54:48.186455 containerd[1440]: time="2024-10-09T00:54:48.186385310Z" level=info msg="StopContainer for \"9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99\" with timeout 30 (s)"
Oct 9 00:54:48.187576 containerd[1440]: time="2024-10-09T00:54:48.187187499Z" level=info msg="Stop container \"9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99\" with signal terminated"
Oct 9 00:54:48.199521 systemd[1]: cri-containerd-9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99.scope: Deactivated successfully.
Oct 9 00:54:48.220675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99-rootfs.mount: Deactivated successfully.
Oct 9 00:54:48.229244 containerd[1440]: time="2024-10-09T00:54:48.229203654Z" level=info msg="StopContainer for \"629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc\" with timeout 2 (s)"
Oct 9 00:54:48.229481 containerd[1440]: time="2024-10-09T00:54:48.229444851Z" level=info msg="Stop container \"629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc\" with signal terminated"
Oct 9 00:54:48.229805 containerd[1440]: time="2024-10-09T00:54:48.229745807Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 9 00:54:48.231019 containerd[1440]: time="2024-10-09T00:54:48.230979431Z" level=info msg="shim disconnected" id=9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99 namespace=k8s.io
Oct 9 00:54:48.231019 containerd[1440]: time="2024-10-09T00:54:48.231017510Z" level=warning msg="cleaning up after shim disconnected" id=9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99 namespace=k8s.io
Oct 9 00:54:48.231117 containerd[1440]: time="2024-10-09T00:54:48.231024550Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 00:54:48.236467 systemd-networkd[1384]: lxc_health: Link DOWN
Oct 9 00:54:48.236477 systemd-networkd[1384]: lxc_health: Lost carrier
Oct 9 00:54:48.247542 containerd[1440]: time="2024-10-09T00:54:48.247474249Z" level=info msg="StopContainer for \"9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99\" returns successfully"
Oct 9 00:54:48.249904 containerd[1440]: time="2024-10-09T00:54:48.249860257Z" level=info msg="StopPodSandbox for \"3ce08b925e0d1ea5455994dc5776027f74932523802aa8959250b9fdeb81c30c\""
Oct 9 00:54:48.249979 containerd[1440]: time="2024-10-09T00:54:48.249909896Z" level=info msg="Container to stop \"9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 9 00:54:48.251556 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ce08b925e0d1ea5455994dc5776027f74932523802aa8959250b9fdeb81c30c-shm.mount: Deactivated successfully.
Oct 9 00:54:48.257728 systemd[1]: cri-containerd-3ce08b925e0d1ea5455994dc5776027f74932523802aa8959250b9fdeb81c30c.scope: Deactivated successfully.
Oct 9 00:54:48.260811 systemd[1]: cri-containerd-629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc.scope: Deactivated successfully.
Oct 9 00:54:48.261199 systemd[1]: cri-containerd-629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc.scope: Consumed 6.402s CPU time.
Oct 9 00:54:48.279028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc-rootfs.mount: Deactivated successfully.
Oct 9 00:54:48.285845 containerd[1440]: time="2024-10-09T00:54:48.285789374Z" level=info msg="shim disconnected" id=629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc namespace=k8s.io
Oct 9 00:54:48.285845 containerd[1440]: time="2024-10-09T00:54:48.285842573Z" level=warning msg="cleaning up after shim disconnected" id=629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc namespace=k8s.io
Oct 9 00:54:48.285845 containerd[1440]: time="2024-10-09T00:54:48.285850293Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 00:54:48.294274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ce08b925e0d1ea5455994dc5776027f74932523802aa8959250b9fdeb81c30c-rootfs.mount: Deactivated successfully.
Oct 9 00:54:48.297489 containerd[1440]: time="2024-10-09T00:54:48.297393938Z" level=info msg="shim disconnected" id=3ce08b925e0d1ea5455994dc5776027f74932523802aa8959250b9fdeb81c30c namespace=k8s.io
Oct 9 00:54:48.297489 containerd[1440]: time="2024-10-09T00:54:48.297454377Z" level=warning msg="cleaning up after shim disconnected" id=3ce08b925e0d1ea5455994dc5776027f74932523802aa8959250b9fdeb81c30c namespace=k8s.io
Oct 9 00:54:48.297489 containerd[1440]: time="2024-10-09T00:54:48.297462577Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 00:54:48.300306 containerd[1440]: time="2024-10-09T00:54:48.300272699Z" level=info msg="StopContainer for \"629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc\" returns successfully"
Oct 9 00:54:48.300772 containerd[1440]: time="2024-10-09T00:54:48.300742893Z" level=info msg="StopPodSandbox for \"44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840\""
Oct 9 00:54:48.300818 containerd[1440]: time="2024-10-09T00:54:48.300784492Z" level=info msg="Container to stop \"db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 9 00:54:48.300818 containerd[1440]: time="2024-10-09T00:54:48.300800452Z" level=info msg="Container to stop \"82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 9 00:54:48.300818 containerd[1440]: time="2024-10-09T00:54:48.300808692Z" level=info msg="Container to stop \"e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 9 00:54:48.300818 containerd[1440]: time="2024-10-09T00:54:48.300817412Z" level=info msg="Container to stop \"cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 9 00:54:48.300911 containerd[1440]: time="2024-10-09T00:54:48.300825732Z" level=info msg="Container to stop \"629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 9 00:54:48.306636 systemd[1]: cri-containerd-44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840.scope: Deactivated successfully.
Oct 9 00:54:48.313129 containerd[1440]: time="2024-10-09T00:54:48.312974649Z" level=info msg="TearDown network for sandbox \"3ce08b925e0d1ea5455994dc5776027f74932523802aa8959250b9fdeb81c30c\" successfully"
Oct 9 00:54:48.313129 containerd[1440]: time="2024-10-09T00:54:48.313006128Z" level=info msg="StopPodSandbox for \"3ce08b925e0d1ea5455994dc5776027f74932523802aa8959250b9fdeb81c30c\" returns successfully"
Oct 9 00:54:48.339696 containerd[1440]: time="2024-10-09T00:54:48.339640570Z" level=info msg="shim disconnected" id=44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840 namespace=k8s.io
Oct 9 00:54:48.340089 containerd[1440]: time="2024-10-09T00:54:48.339932966Z" level=warning msg="cleaning up after shim disconnected" id=44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840 namespace=k8s.io
Oct 9 00:54:48.340089 containerd[1440]: time="2024-10-09T00:54:48.339953406Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 00:54:48.354433 containerd[1440]: time="2024-10-09T00:54:48.354354292Z" level=info msg="TearDown network for sandbox \"44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840\" successfully"
Oct 9 00:54:48.354433 containerd[1440]: time="2024-10-09T00:54:48.354388612Z" level=info msg="StopPodSandbox for \"44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840\" returns successfully"
Oct 9 00:54:48.484475 kubelet[2512]: I1009 00:54:48.483460 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-hostproc\") pod \"e7697970-fccd-4ef0-985f-603ec2eb0704\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") "
Oct 9 00:54:48.484475 kubelet[2512]: I1009 00:54:48.483505 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-cilium-run\") pod \"e7697970-fccd-4ef0-985f-603ec2eb0704\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") "
Oct 9 00:54:48.484475 kubelet[2512]: I1009 00:54:48.483523 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-etc-cni-netd\") pod \"e7697970-fccd-4ef0-985f-603ec2eb0704\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") "
Oct 9 00:54:48.484475 kubelet[2512]: I1009 00:54:48.483540 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-xtables-lock\") pod \"e7697970-fccd-4ef0-985f-603ec2eb0704\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") "
Oct 9 00:54:48.484475 kubelet[2512]: I1009 00:54:48.483555 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-bpf-maps\") pod \"e7697970-fccd-4ef0-985f-603ec2eb0704\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") "
Oct 9 00:54:48.484475 kubelet[2512]: I1009 00:54:48.483572 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-lib-modules\") pod \"e7697970-fccd-4ef0-985f-603ec2eb0704\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") "
Oct 9 00:54:48.484940 kubelet[2512]: I1009 00:54:48.483588 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-host-proc-sys-kernel\") pod \"e7697970-fccd-4ef0-985f-603ec2eb0704\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") "
Oct 9 00:54:48.484940 kubelet[2512]: I1009 00:54:48.483608 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-host-proc-sys-net\") pod \"e7697970-fccd-4ef0-985f-603ec2eb0704\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") "
Oct 9 00:54:48.484940 kubelet[2512]: I1009 00:54:48.483624 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-cilium-cgroup\") pod \"e7697970-fccd-4ef0-985f-603ec2eb0704\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") "
Oct 9 00:54:48.484940 kubelet[2512]: I1009 00:54:48.483645 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjbch\" (UniqueName: \"kubernetes.io/projected/b092076b-b181-4c59-b908-935cbb7c7037-kube-api-access-xjbch\") pod \"b092076b-b181-4c59-b908-935cbb7c7037\" (UID: \"b092076b-b181-4c59-b908-935cbb7c7037\") "
Oct 9 00:54:48.484940 kubelet[2512]: I1009 00:54:48.483663 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e7697970-fccd-4ef0-985f-603ec2eb0704-clustermesh-secrets\") pod \"e7697970-fccd-4ef0-985f-603ec2eb0704\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") "
Oct 9 00:54:48.484940 kubelet[2512]: I1009 00:54:48.483682 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7697970-fccd-4ef0-985f-603ec2eb0704-cilium-config-path\") pod \"e7697970-fccd-4ef0-985f-603ec2eb0704\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") "
Oct 9 00:54:48.485068 kubelet[2512]: I1009 00:54:48.483700 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mthjn\" (UniqueName: \"kubernetes.io/projected/e7697970-fccd-4ef0-985f-603ec2eb0704-kube-api-access-mthjn\") pod \"e7697970-fccd-4ef0-985f-603ec2eb0704\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") "
Oct 9 00:54:48.485068 kubelet[2512]: I1009 00:54:48.483716 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e7697970-fccd-4ef0-985f-603ec2eb0704-hubble-tls\") pod \"e7697970-fccd-4ef0-985f-603ec2eb0704\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") "
Oct 9 00:54:48.485068 kubelet[2512]: I1009 00:54:48.483732 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-cni-path\") pod \"e7697970-fccd-4ef0-985f-603ec2eb0704\" (UID: \"e7697970-fccd-4ef0-985f-603ec2eb0704\") "
Oct 9 00:54:48.485068 kubelet[2512]: I1009 00:54:48.483749 2512 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b092076b-b181-4c59-b908-935cbb7c7037-cilium-config-path\") pod \"b092076b-b181-4c59-b908-935cbb7c7037\" (UID: \"b092076b-b181-4c59-b908-935cbb7c7037\") "
Oct 9 00:54:48.486024 kubelet[2512]: I1009 00:54:48.485618 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-hostproc" (OuterVolumeSpecName: "hostproc") pod "e7697970-fccd-4ef0-985f-603ec2eb0704" (UID: "e7697970-fccd-4ef0-985f-603ec2eb0704"). InnerVolumeSpecName "hostproc".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:54:48.486250 kubelet[2512]: I1009 00:54:48.485633 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e7697970-fccd-4ef0-985f-603ec2eb0704" (UID: "e7697970-fccd-4ef0-985f-603ec2eb0704"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:54:48.486250 kubelet[2512]: I1009 00:54:48.486168 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e7697970-fccd-4ef0-985f-603ec2eb0704" (UID: "e7697970-fccd-4ef0-985f-603ec2eb0704"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:54:48.486250 kubelet[2512]: I1009 00:54:48.486223 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e7697970-fccd-4ef0-985f-603ec2eb0704" (UID: "e7697970-fccd-4ef0-985f-603ec2eb0704"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:54:48.486250 kubelet[2512]: I1009 00:54:48.486240 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e7697970-fccd-4ef0-985f-603ec2eb0704" (UID: "e7697970-fccd-4ef0-985f-603ec2eb0704"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:54:48.486521 kubelet[2512]: I1009 00:54:48.486254 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e7697970-fccd-4ef0-985f-603ec2eb0704" (UID: "e7697970-fccd-4ef0-985f-603ec2eb0704"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:54:48.486521 kubelet[2512]: I1009 00:54:48.486269 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e7697970-fccd-4ef0-985f-603ec2eb0704" (UID: "e7697970-fccd-4ef0-985f-603ec2eb0704"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:54:48.486521 kubelet[2512]: I1009 00:54:48.486283 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e7697970-fccd-4ef0-985f-603ec2eb0704" (UID: "e7697970-fccd-4ef0-985f-603ec2eb0704"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:54:48.486521 kubelet[2512]: I1009 00:54:48.486122 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e7697970-fccd-4ef0-985f-603ec2eb0704" (UID: "e7697970-fccd-4ef0-985f-603ec2eb0704"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:54:48.488404 kubelet[2512]: I1009 00:54:48.488304 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-cni-path" (OuterVolumeSpecName: "cni-path") pod "e7697970-fccd-4ef0-985f-603ec2eb0704" (UID: "e7697970-fccd-4ef0-985f-603ec2eb0704"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:54:48.489611 kubelet[2512]: I1009 00:54:48.489514 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7697970-fccd-4ef0-985f-603ec2eb0704-kube-api-access-mthjn" (OuterVolumeSpecName: "kube-api-access-mthjn") pod "e7697970-fccd-4ef0-985f-603ec2eb0704" (UID: "e7697970-fccd-4ef0-985f-603ec2eb0704"). InnerVolumeSpecName "kube-api-access-mthjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 00:54:48.489909 kubelet[2512]: I1009 00:54:48.489878 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7697970-fccd-4ef0-985f-603ec2eb0704-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e7697970-fccd-4ef0-985f-603ec2eb0704" (UID: "e7697970-fccd-4ef0-985f-603ec2eb0704"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 00:54:48.490138 kubelet[2512]: I1009 00:54:48.490104 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b092076b-b181-4c59-b908-935cbb7c7037-kube-api-access-xjbch" (OuterVolumeSpecName: "kube-api-access-xjbch") pod "b092076b-b181-4c59-b908-935cbb7c7037" (UID: "b092076b-b181-4c59-b908-935cbb7c7037"). InnerVolumeSpecName "kube-api-access-xjbch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 00:54:48.490192 kubelet[2512]: I1009 00:54:48.490183 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7697970-fccd-4ef0-985f-603ec2eb0704-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e7697970-fccd-4ef0-985f-603ec2eb0704" (UID: "e7697970-fccd-4ef0-985f-603ec2eb0704"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 00:54:48.490929 kubelet[2512]: I1009 00:54:48.490903 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7697970-fccd-4ef0-985f-603ec2eb0704-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e7697970-fccd-4ef0-985f-603ec2eb0704" (UID: "e7697970-fccd-4ef0-985f-603ec2eb0704"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 9 00:54:48.491730 kubelet[2512]: I1009 00:54:48.491698 2512 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b092076b-b181-4c59-b908-935cbb7c7037-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b092076b-b181-4c59-b908-935cbb7c7037" (UID: "b092076b-b181-4c59-b908-935cbb7c7037"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 00:54:48.584018 kubelet[2512]: I1009 00:54:48.583951 2512 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 9 00:54:48.584018 kubelet[2512]: I1009 00:54:48.583991 2512 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 9 00:54:48.584018 kubelet[2512]: I1009 00:54:48.584000 2512 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 9 00:54:48.584018 kubelet[2512]: I1009 00:54:48.584009 2512 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 9 00:54:48.584018 kubelet[2512]: I1009 00:54:48.584018 2512 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 9 00:54:48.584018 kubelet[2512]: I1009 00:54:48.584026 2512 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 9 00:54:48.584018 kubelet[2512]: I1009 00:54:48.584036 2512 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 9 00:54:48.584281 kubelet[2512]: I1009 00:54:48.584044 2512 
reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 9 00:54:48.584281 kubelet[2512]: I1009 00:54:48.584051 2512 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 9 00:54:48.584281 kubelet[2512]: I1009 00:54:48.584060 2512 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xjbch\" (UniqueName: \"kubernetes.io/projected/b092076b-b181-4c59-b908-935cbb7c7037-kube-api-access-xjbch\") on node \"localhost\" DevicePath \"\"" Oct 9 00:54:48.584281 kubelet[2512]: I1009 00:54:48.584069 2512 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e7697970-fccd-4ef0-985f-603ec2eb0704-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 9 00:54:48.584281 kubelet[2512]: I1009 00:54:48.584076 2512 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7697970-fccd-4ef0-985f-603ec2eb0704-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 9 00:54:48.584281 kubelet[2512]: I1009 00:54:48.584084 2512 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mthjn\" (UniqueName: \"kubernetes.io/projected/e7697970-fccd-4ef0-985f-603ec2eb0704-kube-api-access-mthjn\") on node \"localhost\" DevicePath \"\"" Oct 9 00:54:48.584281 kubelet[2512]: I1009 00:54:48.584092 2512 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e7697970-fccd-4ef0-985f-603ec2eb0704-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 9 00:54:48.584281 kubelet[2512]: I1009 00:54:48.584101 2512 reconciler_common.go:288] "Volume detached for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/e7697970-fccd-4ef0-985f-603ec2eb0704-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 9 00:54:48.584476 kubelet[2512]: I1009 00:54:48.584109 2512 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b092076b-b181-4c59-b908-935cbb7c7037-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 9 00:54:48.650457 systemd[1]: Removed slice kubepods-besteffort-podb092076b_b181_4c59_b908_935cbb7c7037.slice - libcontainer container kubepods-besteffort-podb092076b_b181_4c59_b908_935cbb7c7037.slice. Oct 9 00:54:48.651001 kubelet[2512]: I1009 00:54:48.650577 2512 scope.go:117] "RemoveContainer" containerID="629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc" Oct 9 00:54:48.653011 containerd[1440]: time="2024-10-09T00:54:48.652978119Z" level=info msg="RemoveContainer for \"629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc\"" Oct 9 00:54:48.656918 systemd[1]: Removed slice kubepods-burstable-pode7697970_fccd_4ef0_985f_603ec2eb0704.slice - libcontainer container kubepods-burstable-pode7697970_fccd_4ef0_985f_603ec2eb0704.slice. Oct 9 00:54:48.657048 systemd[1]: kubepods-burstable-pode7697970_fccd_4ef0_985f_603ec2eb0704.slice: Consumed 6.554s CPU time. 
Oct 9 00:54:48.659927 containerd[1440]: time="2024-10-09T00:54:48.659248515Z" level=info msg="RemoveContainer for \"629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc\" returns successfully" Oct 9 00:54:48.660006 kubelet[2512]: I1009 00:54:48.659691 2512 scope.go:117] "RemoveContainer" containerID="e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225" Oct 9 00:54:48.661073 containerd[1440]: time="2024-10-09T00:54:48.660887733Z" level=info msg="RemoveContainer for \"e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225\"" Oct 9 00:54:48.664052 containerd[1440]: time="2024-10-09T00:54:48.664021891Z" level=info msg="RemoveContainer for \"e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225\" returns successfully" Oct 9 00:54:48.664295 kubelet[2512]: I1009 00:54:48.664231 2512 scope.go:117] "RemoveContainer" containerID="cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452" Oct 9 00:54:48.665543 containerd[1440]: time="2024-10-09T00:54:48.665520671Z" level=info msg="RemoveContainer for \"cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452\"" Oct 9 00:54:48.668643 containerd[1440]: time="2024-10-09T00:54:48.668594069Z" level=info msg="RemoveContainer for \"cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452\" returns successfully" Oct 9 00:54:48.668800 kubelet[2512]: I1009 00:54:48.668774 2512 scope.go:117] "RemoveContainer" containerID="82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64" Oct 9 00:54:48.669785 containerd[1440]: time="2024-10-09T00:54:48.669761214Z" level=info msg="RemoveContainer for \"82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64\"" Oct 9 00:54:48.672040 containerd[1440]: time="2024-10-09T00:54:48.671999943Z" level=info msg="RemoveContainer for \"82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64\" returns successfully" Oct 9 00:54:48.672421 kubelet[2512]: I1009 00:54:48.672396 2512 scope.go:117] "RemoveContainer" 
containerID="db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17" Oct 9 00:54:48.674571 containerd[1440]: time="2024-10-09T00:54:48.673654281Z" level=info msg="RemoveContainer for \"db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17\"" Oct 9 00:54:48.676126 containerd[1440]: time="2024-10-09T00:54:48.676093248Z" level=info msg="RemoveContainer for \"db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17\" returns successfully" Oct 9 00:54:48.676275 kubelet[2512]: I1009 00:54:48.676252 2512 scope.go:117] "RemoveContainer" containerID="629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc" Oct 9 00:54:48.676480 containerd[1440]: time="2024-10-09T00:54:48.676447564Z" level=error msg="ContainerStatus for \"629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc\": not found" Oct 9 00:54:48.678188 kubelet[2512]: E1009 00:54:48.678160 2512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc\": not found" containerID="629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc" Oct 9 00:54:48.678969 kubelet[2512]: I1009 00:54:48.678199 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc"} err="failed to get container status \"629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc\": rpc error: code = NotFound desc = an error occurred when try to find container \"629d571be176cbfea24a5c90e9d24ac740944bf08409eb239aa804e7b7bfccbc\": not found" Oct 9 00:54:48.679042 kubelet[2512]: I1009 00:54:48.678972 2512 scope.go:117] "RemoveContainer" 
containerID="e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225" Oct 9 00:54:48.679293 containerd[1440]: time="2024-10-09T00:54:48.679263206Z" level=error msg="ContainerStatus for \"e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225\": not found" Oct 9 00:54:48.679550 kubelet[2512]: E1009 00:54:48.679417 2512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225\": not found" containerID="e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225" Oct 9 00:54:48.679550 kubelet[2512]: I1009 00:54:48.679442 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225"} err="failed to get container status \"e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7e55cb6147c4cd9e9bdd9dd6fb60906be4f88020b209bb33324e437d11ff225\": not found" Oct 9 00:54:48.679550 kubelet[2512]: I1009 00:54:48.679459 2512 scope.go:117] "RemoveContainer" containerID="cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452" Oct 9 00:54:48.679642 containerd[1440]: time="2024-10-09T00:54:48.679607601Z" level=error msg="ContainerStatus for \"cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452\": not found" Oct 9 00:54:48.679729 kubelet[2512]: E1009 00:54:48.679707 2512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452\": not found" containerID="cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452" Oct 9 00:54:48.679784 kubelet[2512]: I1009 00:54:48.679730 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452"} err="failed to get container status \"cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd694b68027a78ee1fb78a574fba1c8f42e6ee04f792cb7962d1b5f7a0d9f452\": not found" Oct 9 00:54:48.679784 kubelet[2512]: I1009 00:54:48.679744 2512 scope.go:117] "RemoveContainer" containerID="82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64" Oct 9 00:54:48.679908 containerd[1440]: time="2024-10-09T00:54:48.679882078Z" level=error msg="ContainerStatus for \"82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64\": not found" Oct 9 00:54:48.680029 kubelet[2512]: E1009 00:54:48.680009 2512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64\": not found" containerID="82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64" Oct 9 00:54:48.680058 kubelet[2512]: I1009 00:54:48.680029 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64"} err="failed to get container status \"82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"82d2f734445a57a03ba12178747a8258f88986679fdcaea53427cf2c93df9f64\": not found" Oct 9 00:54:48.680058 kubelet[2512]: I1009 00:54:48.680041 2512 scope.go:117] "RemoveContainer" containerID="db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17" Oct 9 00:54:48.680164 containerd[1440]: time="2024-10-09T00:54:48.680143794Z" level=error msg="ContainerStatus for \"db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17\": not found" Oct 9 00:54:48.680234 kubelet[2512]: E1009 00:54:48.680218 2512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17\": not found" containerID="db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17" Oct 9 00:54:48.680266 kubelet[2512]: I1009 00:54:48.680237 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17"} err="failed to get container status \"db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17\": rpc error: code = NotFound desc = an error occurred when try to find container \"db0bc09cd4b94f703b182d50739300e5cf158dd86b9e98d0786a9583bb6c7f17\": not found" Oct 9 00:54:48.680266 kubelet[2512]: I1009 00:54:48.680256 2512 scope.go:117] "RemoveContainer" containerID="9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99" Oct 9 00:54:48.681078 containerd[1440]: time="2024-10-09T00:54:48.681049302Z" level=info msg="RemoveContainer for \"9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99\"" Oct 9 00:54:48.689194 containerd[1440]: time="2024-10-09T00:54:48.689154313Z" level=info msg="RemoveContainer for 
\"9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99\" returns successfully" Oct 9 00:54:48.689360 kubelet[2512]: I1009 00:54:48.689324 2512 scope.go:117] "RemoveContainer" containerID="9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99" Oct 9 00:54:48.689536 containerd[1440]: time="2024-10-09T00:54:48.689514908Z" level=error msg="ContainerStatus for \"9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99\": not found" Oct 9 00:54:48.689629 kubelet[2512]: E1009 00:54:48.689611 2512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99\": not found" containerID="9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99" Oct 9 00:54:48.689678 kubelet[2512]: I1009 00:54:48.689633 2512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99"} err="failed to get container status \"9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f1d31ea852e425728be10bf3704c82cef8a8cc3066483f7007bf04505799c99\": not found" Oct 9 00:54:49.205562 systemd[1]: var-lib-kubelet-pods-b092076b\x2db181\x2d4c59\x2db908\x2d935cbb7c7037-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxjbch.mount: Deactivated successfully. Oct 9 00:54:49.205670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840-rootfs.mount: Deactivated successfully. 
Oct 9 00:54:49.205727 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44e86e097d17e851f9263e844d35dc87628bec5c9c35cc47811293875acf4840-shm.mount: Deactivated successfully. Oct 9 00:54:49.205795 systemd[1]: var-lib-kubelet-pods-e7697970\x2dfccd\x2d4ef0\x2d985f\x2d603ec2eb0704-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmthjn.mount: Deactivated successfully. Oct 9 00:54:49.205849 systemd[1]: var-lib-kubelet-pods-e7697970\x2dfccd\x2d4ef0\x2d985f\x2d603ec2eb0704-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 9 00:54:49.205896 systemd[1]: var-lib-kubelet-pods-e7697970\x2dfccd\x2d4ef0\x2d985f\x2d603ec2eb0704-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 9 00:54:49.450664 kubelet[2512]: I1009 00:54:49.448943 2512 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b092076b-b181-4c59-b908-935cbb7c7037" path="/var/lib/kubelet/pods/b092076b-b181-4c59-b908-935cbb7c7037/volumes" Oct 9 00:54:49.451157 kubelet[2512]: I1009 00:54:49.451115 2512 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7697970-fccd-4ef0-985f-603ec2eb0704" path="/var/lib/kubelet/pods/e7697970-fccd-4ef0-985f-603ec2eb0704/volumes" Oct 9 00:54:50.150988 sshd[4156]: pam_unix(sshd:session): session closed for user core Oct 9 00:54:50.159592 systemd[1]: sshd@21-10.0.0.92:22-10.0.0.1:59020.service: Deactivated successfully. Oct 9 00:54:50.161193 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 00:54:50.161390 systemd[1]: session-22.scope: Consumed 1.430s CPU time. Oct 9 00:54:50.162527 systemd-logind[1429]: Session 22 logged out. Waiting for processes to exit. Oct 9 00:54:50.172641 systemd[1]: Started sshd@22-10.0.0.92:22-10.0.0.1:59022.service - OpenSSH per-connection server daemon (10.0.0.1:59022). Oct 9 00:54:50.173399 systemd-logind[1429]: Removed session 22. 
Oct 9 00:54:50.205796 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 59022 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:54:50.207206 sshd[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:54:50.212801 systemd-logind[1429]: New session 23 of user core. Oct 9 00:54:50.221517 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 00:54:51.217970 sshd[4320]: pam_unix(sshd:session): session closed for user core Oct 9 00:54:51.226401 kubelet[2512]: E1009 00:54:51.225964 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e7697970-fccd-4ef0-985f-603ec2eb0704" containerName="apply-sysctl-overwrites" Oct 9 00:54:51.226401 kubelet[2512]: E1009 00:54:51.225991 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e7697970-fccd-4ef0-985f-603ec2eb0704" containerName="clean-cilium-state" Oct 9 00:54:51.226401 kubelet[2512]: E1009 00:54:51.225997 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e7697970-fccd-4ef0-985f-603ec2eb0704" containerName="mount-cgroup" Oct 9 00:54:51.226401 kubelet[2512]: E1009 00:54:51.226003 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b092076b-b181-4c59-b908-935cbb7c7037" containerName="cilium-operator" Oct 9 00:54:51.226401 kubelet[2512]: E1009 00:54:51.226009 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e7697970-fccd-4ef0-985f-603ec2eb0704" containerName="mount-bpf-fs" Oct 9 00:54:51.226401 kubelet[2512]: E1009 00:54:51.226015 2512 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e7697970-fccd-4ef0-985f-603ec2eb0704" containerName="cilium-agent" Oct 9 00:54:51.226401 kubelet[2512]: I1009 00:54:51.226039 2512 memory_manager.go:354] "RemoveStaleState removing state" podUID="b092076b-b181-4c59-b908-935cbb7c7037" containerName="cilium-operator" Oct 9 00:54:51.226401 kubelet[2512]: I1009 00:54:51.226045 2512 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e7697970-fccd-4ef0-985f-603ec2eb0704" containerName="cilium-agent" Oct 9 00:54:51.225496 systemd[1]: sshd@22-10.0.0.92:22-10.0.0.1:59022.service: Deactivated successfully. Oct 9 00:54:51.227023 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 00:54:51.229109 systemd-logind[1429]: Session 23 logged out. Waiting for processes to exit. Oct 9 00:54:51.242586 systemd[1]: Started sshd@23-10.0.0.92:22-10.0.0.1:59024.service - OpenSSH per-connection server daemon (10.0.0.1:59024). Oct 9 00:54:51.246007 systemd-logind[1429]: Removed session 23. Oct 9 00:54:51.252606 systemd[1]: Created slice kubepods-burstable-pod6c06316a_fb16_4fe7_b4b8_74dd19c35f36.slice - libcontainer container kubepods-burstable-pod6c06316a_fb16_4fe7_b4b8_74dd19c35f36.slice. Oct 9 00:54:51.274495 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 59024 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:54:51.275749 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:54:51.279440 systemd-logind[1429]: New session 24 of user core. Oct 9 00:54:51.289522 systemd[1]: Started session-24.scope - Session 24 of User core. 
Oct 9 00:54:51.299237 kubelet[2512]: I1009 00:54:51.299196 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c06316a-fb16-4fe7-b4b8-74dd19c35f36-cilium-config-path\") pod \"cilium-prfht\" (UID: \"6c06316a-fb16-4fe7-b4b8-74dd19c35f36\") " pod="kube-system/cilium-prfht"
Oct 9 00:54:51.299330 kubelet[2512]: I1009 00:54:51.299244 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c06316a-fb16-4fe7-b4b8-74dd19c35f36-host-proc-sys-net\") pod \"cilium-prfht\" (UID: \"6c06316a-fb16-4fe7-b4b8-74dd19c35f36\") " pod="kube-system/cilium-prfht"
Oct 9 00:54:51.299330 kubelet[2512]: I1009 00:54:51.299265 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c06316a-fb16-4fe7-b4b8-74dd19c35f36-cilium-run\") pod \"cilium-prfht\" (UID: \"6c06316a-fb16-4fe7-b4b8-74dd19c35f36\") " pod="kube-system/cilium-prfht"
Oct 9 00:54:51.299330 kubelet[2512]: I1009 00:54:51.299280 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c06316a-fb16-4fe7-b4b8-74dd19c35f36-cilium-cgroup\") pod \"cilium-prfht\" (UID: \"6c06316a-fb16-4fe7-b4b8-74dd19c35f36\") " pod="kube-system/cilium-prfht"
Oct 9 00:54:51.299330 kubelet[2512]: I1009 00:54:51.299297 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c06316a-fb16-4fe7-b4b8-74dd19c35f36-host-proc-sys-kernel\") pod \"cilium-prfht\" (UID: \"6c06316a-fb16-4fe7-b4b8-74dd19c35f36\") " pod="kube-system/cilium-prfht"
Oct 9 00:54:51.299330 kubelet[2512]: I1009 00:54:51.299320 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqq9s\" (UniqueName: \"kubernetes.io/projected/6c06316a-fb16-4fe7-b4b8-74dd19c35f36-kube-api-access-kqq9s\") pod \"cilium-prfht\" (UID: \"6c06316a-fb16-4fe7-b4b8-74dd19c35f36\") " pod="kube-system/cilium-prfht"
Oct 9 00:54:51.299436 kubelet[2512]: I1009 00:54:51.299340 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c06316a-fb16-4fe7-b4b8-74dd19c35f36-cni-path\") pod \"cilium-prfht\" (UID: \"6c06316a-fb16-4fe7-b4b8-74dd19c35f36\") " pod="kube-system/cilium-prfht"
Oct 9 00:54:51.299436 kubelet[2512]: I1009 00:54:51.299356 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c06316a-fb16-4fe7-b4b8-74dd19c35f36-lib-modules\") pod \"cilium-prfht\" (UID: \"6c06316a-fb16-4fe7-b4b8-74dd19c35f36\") " pod="kube-system/cilium-prfht"
Oct 9 00:54:51.299436 kubelet[2512]: I1009 00:54:51.299372 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c06316a-fb16-4fe7-b4b8-74dd19c35f36-bpf-maps\") pod \"cilium-prfht\" (UID: \"6c06316a-fb16-4fe7-b4b8-74dd19c35f36\") " pod="kube-system/cilium-prfht"
Oct 9 00:54:51.299436 kubelet[2512]: I1009 00:54:51.299386 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c06316a-fb16-4fe7-b4b8-74dd19c35f36-etc-cni-netd\") pod \"cilium-prfht\" (UID: \"6c06316a-fb16-4fe7-b4b8-74dd19c35f36\") " pod="kube-system/cilium-prfht"
Oct 9 00:54:51.299436 kubelet[2512]: I1009 00:54:51.299401 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c06316a-fb16-4fe7-b4b8-74dd19c35f36-hostproc\") pod \"cilium-prfht\" (UID: \"6c06316a-fb16-4fe7-b4b8-74dd19c35f36\") " pod="kube-system/cilium-prfht"
Oct 9 00:54:51.299436 kubelet[2512]: I1009 00:54:51.299418 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c06316a-fb16-4fe7-b4b8-74dd19c35f36-xtables-lock\") pod \"cilium-prfht\" (UID: \"6c06316a-fb16-4fe7-b4b8-74dd19c35f36\") " pod="kube-system/cilium-prfht"
Oct 9 00:54:51.299556 kubelet[2512]: I1009 00:54:51.299443 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c06316a-fb16-4fe7-b4b8-74dd19c35f36-hubble-tls\") pod \"cilium-prfht\" (UID: \"6c06316a-fb16-4fe7-b4b8-74dd19c35f36\") " pod="kube-system/cilium-prfht"
Oct 9 00:54:51.299556 kubelet[2512]: I1009 00:54:51.299458 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c06316a-fb16-4fe7-b4b8-74dd19c35f36-clustermesh-secrets\") pod \"cilium-prfht\" (UID: \"6c06316a-fb16-4fe7-b4b8-74dd19c35f36\") " pod="kube-system/cilium-prfht"
Oct 9 00:54:51.299556 kubelet[2512]: I1009 00:54:51.299473 2512 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6c06316a-fb16-4fe7-b4b8-74dd19c35f36-cilium-ipsec-secrets\") pod \"cilium-prfht\" (UID: \"6c06316a-fb16-4fe7-b4b8-74dd19c35f36\") " pod="kube-system/cilium-prfht"
Oct 9 00:54:51.338058 sshd[4333]: pam_unix(sshd:session): session closed for user core
Oct 9 00:54:51.346674 systemd[1]: sshd@23-10.0.0.92:22-10.0.0.1:59024.service: Deactivated successfully.
Oct 9 00:54:51.348179 systemd[1]: session-24.scope: Deactivated successfully.
Oct 9 00:54:51.349464 systemd-logind[1429]: Session 24 logged out. Waiting for processes to exit.
Oct 9 00:54:51.359638 systemd[1]: Started sshd@24-10.0.0.92:22-10.0.0.1:59030.service - OpenSSH per-connection server daemon (10.0.0.1:59030).
Oct 9 00:54:51.360794 systemd-logind[1429]: Removed session 24.
Oct 9 00:54:51.389707 sshd[4341]: Accepted publickey for core from 10.0.0.1 port 59030 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:54:51.390834 sshd[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:54:51.393915 systemd-logind[1429]: New session 25 of user core.
Oct 9 00:54:51.404448 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 9 00:54:51.558186 kubelet[2512]: E1009 00:54:51.558070 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:51.558623 containerd[1440]: time="2024-10-09T00:54:51.558577972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-prfht,Uid:6c06316a-fb16-4fe7-b4b8-74dd19c35f36,Namespace:kube-system,Attempt:0,}"
Oct 9 00:54:51.575443 containerd[1440]: time="2024-10-09T00:54:51.575032367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 00:54:51.575610 containerd[1440]: time="2024-10-09T00:54:51.575456043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 00:54:51.575610 containerd[1440]: time="2024-10-09T00:54:51.575476123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:54:51.575610 containerd[1440]: time="2024-10-09T00:54:51.575554162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:54:51.596893 systemd[1]: Started cri-containerd-80f45bce2d3eb147301b557026a290ec8ff85ab12cae6840544d35db1a8e6d46.scope - libcontainer container 80f45bce2d3eb147301b557026a290ec8ff85ab12cae6840544d35db1a8e6d46.
Oct 9 00:54:51.619031 containerd[1440]: time="2024-10-09T00:54:51.618810008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-prfht,Uid:6c06316a-fb16-4fe7-b4b8-74dd19c35f36,Namespace:kube-system,Attempt:0,} returns sandbox id \"80f45bce2d3eb147301b557026a290ec8ff85ab12cae6840544d35db1a8e6d46\""
Oct 9 00:54:51.620360 kubelet[2512]: E1009 00:54:51.620156 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:51.622467 containerd[1440]: time="2024-10-09T00:54:51.622414692Z" level=info msg="CreateContainer within sandbox \"80f45bce2d3eb147301b557026a290ec8ff85ab12cae6840544d35db1a8e6d46\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Oct 9 00:54:51.631838 containerd[1440]: time="2024-10-09T00:54:51.631788078Z" level=info msg="CreateContainer within sandbox \"80f45bce2d3eb147301b557026a290ec8ff85ab12cae6840544d35db1a8e6d46\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"886af6a6b6e150735fac459a7f57c25f273d300939cf564c61f6845d99164d09\""
Oct 9 00:54:51.632504 containerd[1440]: time="2024-10-09T00:54:51.632400152Z" level=info msg="StartContainer for \"886af6a6b6e150735fac459a7f57c25f273d300939cf564c61f6845d99164d09\""
Oct 9 00:54:51.665492 systemd[1]: Started cri-containerd-886af6a6b6e150735fac459a7f57c25f273d300939cf564c61f6845d99164d09.scope - libcontainer container 886af6a6b6e150735fac459a7f57c25f273d300939cf564c61f6845d99164d09.
Oct 9 00:54:51.697902 systemd[1]: cri-containerd-886af6a6b6e150735fac459a7f57c25f273d300939cf564c61f6845d99164d09.scope: Deactivated successfully.
Oct 9 00:54:51.708393 containerd[1440]: time="2024-10-09T00:54:51.708348790Z" level=info msg="StartContainer for \"886af6a6b6e150735fac459a7f57c25f273d300939cf564c61f6845d99164d09\" returns successfully"
Oct 9 00:54:51.728477 containerd[1440]: time="2024-10-09T00:54:51.728418309Z" level=info msg="shim disconnected" id=886af6a6b6e150735fac459a7f57c25f273d300939cf564c61f6845d99164d09 namespace=k8s.io
Oct 9 00:54:51.728477 containerd[1440]: time="2024-10-09T00:54:51.728471588Z" level=warning msg="cleaning up after shim disconnected" id=886af6a6b6e150735fac459a7f57c25f273d300939cf564c61f6845d99164d09 namespace=k8s.io
Oct 9 00:54:51.728477 containerd[1440]: time="2024-10-09T00:54:51.728480068Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 00:54:52.502092 kubelet[2512]: E1009 00:54:52.502045 2512 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 9 00:54:52.677264 kubelet[2512]: E1009 00:54:52.676367 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:52.678255 containerd[1440]: time="2024-10-09T00:54:52.678097508Z" level=info msg="CreateContainer within sandbox \"80f45bce2d3eb147301b557026a290ec8ff85ab12cae6840544d35db1a8e6d46\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Oct 9 00:54:52.691297 containerd[1440]: time="2024-10-09T00:54:52.691248390Z" level=info msg="CreateContainer within sandbox \"80f45bce2d3eb147301b557026a290ec8ff85ab12cae6840544d35db1a8e6d46\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9de54be4254cc4a411b4dc96d9ffaae2dbf45cbdab853a2355d32501462ae225\""
Oct 9 00:54:52.692334 containerd[1440]: time="2024-10-09T00:54:52.691911144Z" level=info msg="StartContainer for \"9de54be4254cc4a411b4dc96d9ffaae2dbf45cbdab853a2355d32501462ae225\""
Oct 9 00:54:52.725482 systemd[1]: Started cri-containerd-9de54be4254cc4a411b4dc96d9ffaae2dbf45cbdab853a2355d32501462ae225.scope - libcontainer container 9de54be4254cc4a411b4dc96d9ffaae2dbf45cbdab853a2355d32501462ae225.
Oct 9 00:54:52.746144 containerd[1440]: time="2024-10-09T00:54:52.746102379Z" level=info msg="StartContainer for \"9de54be4254cc4a411b4dc96d9ffaae2dbf45cbdab853a2355d32501462ae225\" returns successfully"
Oct 9 00:54:52.752831 systemd[1]: cri-containerd-9de54be4254cc4a411b4dc96d9ffaae2dbf45cbdab853a2355d32501462ae225.scope: Deactivated successfully.
Oct 9 00:54:52.772322 containerd[1440]: time="2024-10-09T00:54:52.772268184Z" level=info msg="shim disconnected" id=9de54be4254cc4a411b4dc96d9ffaae2dbf45cbdab853a2355d32501462ae225 namespace=k8s.io
Oct 9 00:54:52.772484 containerd[1440]: time="2024-10-09T00:54:52.772331424Z" level=warning msg="cleaning up after shim disconnected" id=9de54be4254cc4a411b4dc96d9ffaae2dbf45cbdab853a2355d32501462ae225 namespace=k8s.io
Oct 9 00:54:52.772484 containerd[1440]: time="2024-10-09T00:54:52.772342503Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 00:54:53.406051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9de54be4254cc4a411b4dc96d9ffaae2dbf45cbdab853a2355d32501462ae225-rootfs.mount: Deactivated successfully.
Oct 9 00:54:53.680887 kubelet[2512]: E1009 00:54:53.680788 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:53.683774 containerd[1440]: time="2024-10-09T00:54:53.683637603Z" level=info msg="CreateContainer within sandbox \"80f45bce2d3eb147301b557026a290ec8ff85ab12cae6840544d35db1a8e6d46\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Oct 9 00:54:53.697449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3032369161.mount: Deactivated successfully.
Oct 9 00:54:53.699165 containerd[1440]: time="2024-10-09T00:54:53.699133000Z" level=info msg="CreateContainer within sandbox \"80f45bce2d3eb147301b557026a290ec8ff85ab12cae6840544d35db1a8e6d46\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3adc9c79bad7c94ee07274a6a226553751eb140fbaa7007fe57960bc57d4104b\""
Oct 9 00:54:53.699664 containerd[1440]: time="2024-10-09T00:54:53.699630156Z" level=info msg="StartContainer for \"3adc9c79bad7c94ee07274a6a226553751eb140fbaa7007fe57960bc57d4104b\""
Oct 9 00:54:53.729458 systemd[1]: Started cri-containerd-3adc9c79bad7c94ee07274a6a226553751eb140fbaa7007fe57960bc57d4104b.scope - libcontainer container 3adc9c79bad7c94ee07274a6a226553751eb140fbaa7007fe57960bc57d4104b.
Oct 9 00:54:53.752609 systemd[1]: cri-containerd-3adc9c79bad7c94ee07274a6a226553751eb140fbaa7007fe57960bc57d4104b.scope: Deactivated successfully.
Oct 9 00:54:53.753042 containerd[1440]: time="2024-10-09T00:54:53.752994693Z" level=info msg="StartContainer for \"3adc9c79bad7c94ee07274a6a226553751eb140fbaa7007fe57960bc57d4104b\" returns successfully"
Oct 9 00:54:53.771979 containerd[1440]: time="2024-10-09T00:54:53.771918743Z" level=info msg="shim disconnected" id=3adc9c79bad7c94ee07274a6a226553751eb140fbaa7007fe57960bc57d4104b namespace=k8s.io
Oct 9 00:54:53.771979 containerd[1440]: time="2024-10-09T00:54:53.771968182Z" level=warning msg="cleaning up after shim disconnected" id=3adc9c79bad7c94ee07274a6a226553751eb140fbaa7007fe57960bc57d4104b namespace=k8s.io
Oct 9 00:54:53.771979 containerd[1440]: time="2024-10-09T00:54:53.771975782Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 00:54:54.406069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3adc9c79bad7c94ee07274a6a226553751eb140fbaa7007fe57960bc57d4104b-rootfs.mount: Deactivated successfully.
Oct 9 00:54:54.683448 kubelet[2512]: E1009 00:54:54.683345 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:54.685963 containerd[1440]: time="2024-10-09T00:54:54.685910462Z" level=info msg="CreateContainer within sandbox \"80f45bce2d3eb147301b557026a290ec8ff85ab12cae6840544d35db1a8e6d46\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 9 00:54:54.699403 containerd[1440]: time="2024-10-09T00:54:54.699365369Z" level=info msg="CreateContainer within sandbox \"80f45bce2d3eb147301b557026a290ec8ff85ab12cae6840544d35db1a8e6d46\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"af1233da5ca0ed1f96f8d9f010ad96f34647c419e9e246e87285e078f52e22c0\""
Oct 9 00:54:54.700587 containerd[1440]: time="2024-10-09T00:54:54.700512001Z" level=info msg="StartContainer for \"af1233da5ca0ed1f96f8d9f010ad96f34647c419e9e246e87285e078f52e22c0\""
Oct 9 00:54:54.728491 systemd[1]: Started cri-containerd-af1233da5ca0ed1f96f8d9f010ad96f34647c419e9e246e87285e078f52e22c0.scope - libcontainer container af1233da5ca0ed1f96f8d9f010ad96f34647c419e9e246e87285e078f52e22c0.
Oct 9 00:54:54.745302 systemd[1]: cri-containerd-af1233da5ca0ed1f96f8d9f010ad96f34647c419e9e246e87285e078f52e22c0.scope: Deactivated successfully.
Oct 9 00:54:54.746171 containerd[1440]: time="2024-10-09T00:54:54.746136005Z" level=info msg="StartContainer for \"af1233da5ca0ed1f96f8d9f010ad96f34647c419e9e246e87285e078f52e22c0\" returns successfully"
Oct 9 00:54:54.764387 containerd[1440]: time="2024-10-09T00:54:54.764330279Z" level=info msg="shim disconnected" id=af1233da5ca0ed1f96f8d9f010ad96f34647c419e9e246e87285e078f52e22c0 namespace=k8s.io
Oct 9 00:54:54.764387 containerd[1440]: time="2024-10-09T00:54:54.764381519Z" level=warning msg="cleaning up after shim disconnected" id=af1233da5ca0ed1f96f8d9f010ad96f34647c419e9e246e87285e078f52e22c0 namespace=k8s.io
Oct 9 00:54:54.764547 containerd[1440]: time="2024-10-09T00:54:54.764391798Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 00:54:55.406128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af1233da5ca0ed1f96f8d9f010ad96f34647c419e9e246e87285e078f52e22c0-rootfs.mount: Deactivated successfully.
Oct 9 00:54:55.687676 kubelet[2512]: E1009 00:54:55.687538 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:55.694710 containerd[1440]: time="2024-10-09T00:54:55.694215951Z" level=info msg="CreateContainer within sandbox \"80f45bce2d3eb147301b557026a290ec8ff85ab12cae6840544d35db1a8e6d46\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 9 00:54:55.707781 containerd[1440]: time="2024-10-09T00:54:55.707735031Z" level=info msg="CreateContainer within sandbox \"80f45bce2d3eb147301b557026a290ec8ff85ab12cae6840544d35db1a8e6d46\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"57f989fc406007fe5f0f83ca4ef5b6269b53074f5f4eaf7f6681b3e7ebc0a0a2\""
Oct 9 00:54:55.709112 containerd[1440]: time="2024-10-09T00:54:55.708642185Z" level=info msg="StartContainer for \"57f989fc406007fe5f0f83ca4ef5b6269b53074f5f4eaf7f6681b3e7ebc0a0a2\""
Oct 9 00:54:55.732460 systemd[1]: Started cri-containerd-57f989fc406007fe5f0f83ca4ef5b6269b53074f5f4eaf7f6681b3e7ebc0a0a2.scope - libcontainer container 57f989fc406007fe5f0f83ca4ef5b6269b53074f5f4eaf7f6681b3e7ebc0a0a2.
Oct 9 00:54:55.755214 containerd[1440]: time="2024-10-09T00:54:55.755174588Z" level=info msg="StartContainer for \"57f989fc406007fe5f0f83ca4ef5b6269b53074f5f4eaf7f6681b3e7ebc0a0a2\" returns successfully"
Oct 9 00:54:56.005339 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Oct 9 00:54:56.691596 kubelet[2512]: E1009 00:54:56.691566 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:56.706809 kubelet[2512]: I1009 00:54:56.706587 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-prfht" podStartSLOduration=5.706573065 podStartE2EDuration="5.706573065s" podCreationTimestamp="2024-10-09 00:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:54:56.706091788 +0000 UTC m=+79.336127464" watchObservedRunningTime="2024-10-09 00:54:56.706573065 +0000 UTC m=+79.336608701"
Oct 9 00:54:57.692985 kubelet[2512]: E1009 00:54:57.692930 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:57.739932 systemd[1]: run-containerd-runc-k8s.io-57f989fc406007fe5f0f83ca4ef5b6269b53074f5f4eaf7f6681b3e7ebc0a0a2-runc.FqoOBD.mount: Deactivated successfully.
Oct 9 00:54:58.445630 kubelet[2512]: E1009 00:54:58.445543 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:58.694634 kubelet[2512]: E1009 00:54:58.694598 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:54:58.818360 systemd-networkd[1384]: lxc_health: Link UP
Oct 9 00:54:58.830067 systemd-networkd[1384]: lxc_health: Gained carrier
Oct 9 00:54:59.699524 kubelet[2512]: E1009 00:54:59.699478 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:55:00.130535 systemd-networkd[1384]: lxc_health: Gained IPv6LL
Oct 9 00:55:00.447844 kubelet[2512]: E1009 00:55:00.447736 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:55:00.448328 kubelet[2512]: E1009 00:55:00.448287 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:55:00.701501 kubelet[2512]: E1009 00:55:00.701401 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:55:01.704631 kubelet[2512]: E1009 00:55:01.702599 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:55:01.988246 systemd[1]: run-containerd-runc-k8s.io-57f989fc406007fe5f0f83ca4ef5b6269b53074f5f4eaf7f6681b3e7ebc0a0a2-runc.BMYMgw.mount: Deactivated successfully.
Oct 9 00:55:04.153109 sshd[4341]: pam_unix(sshd:session): session closed for user core
Oct 9 00:55:04.156689 systemd[1]: sshd@24-10.0.0.92:22-10.0.0.1:59030.service: Deactivated successfully.
Oct 9 00:55:04.158715 systemd[1]: session-25.scope: Deactivated successfully.
Oct 9 00:55:04.159668 systemd-logind[1429]: Session 25 logged out. Waiting for processes to exit.
Oct 9 00:55:04.161560 systemd-logind[1429]: Removed session 25.