Oct 8 19:27:19.919223 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 8 19:27:19.919245 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Oct 8 18:22:02 -00 2024 Oct 8 19:27:19.919255 kernel: KASLR enabled Oct 8 19:27:19.919261 kernel: efi: EFI v2.7 by EDK II Oct 8 19:27:19.919267 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Oct 8 19:27:19.919273 kernel: random: crng init done Oct 8 19:27:19.919280 kernel: ACPI: Early table checksum verification disabled Oct 8 19:27:19.919286 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Oct 8 19:27:19.919292 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 8 19:27:19.919300 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:27:19.919306 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:27:19.919312 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:27:19.919318 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:27:19.919324 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:27:19.919332 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:27:19.919340 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:27:19.919346 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:27:19.919353 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:27:19.919359 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 8 19:27:19.919366 kernel: NUMA: Failed to initialise from firmware Oct 8 19:27:19.919372 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 8 19:27:19.919379 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Oct 8 19:27:19.919385 kernel: Zone ranges: Oct 8 19:27:19.919391 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 8 19:27:19.919398 kernel: DMA32 empty Oct 8 19:27:19.919405 kernel: Normal empty Oct 8 19:27:19.919411 kernel: Movable zone start for each node Oct 8 19:27:19.919418 kernel: Early memory node ranges Oct 8 19:27:19.919424 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Oct 8 19:27:19.919431 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Oct 8 19:27:19.919437 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Oct 8 19:27:19.919443 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Oct 8 19:27:19.919450 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Oct 8 19:27:19.919456 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Oct 8 19:27:19.919463 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Oct 8 19:27:19.919469 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 8 19:27:19.919476 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 8 19:27:19.919483 kernel: psci: probing for conduit method from ACPI. Oct 8 19:27:19.919490 kernel: psci: PSCIv1.1 detected in firmware. 
Oct 8 19:27:19.919496 kernel: psci: Using standard PSCI v0.2 function IDs Oct 8 19:27:19.919505 kernel: psci: Trusted OS migration not required Oct 8 19:27:19.919512 kernel: psci: SMC Calling Convention v1.1 Oct 8 19:27:19.919524 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 8 19:27:19.919533 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Oct 8 19:27:19.919540 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Oct 8 19:27:19.919547 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 8 19:27:19.919554 kernel: Detected PIPT I-cache on CPU0 Oct 8 19:27:19.919561 kernel: CPU features: detected: GIC system register CPU interface Oct 8 19:27:19.919568 kernel: CPU features: detected: Hardware dirty bit management Oct 8 19:27:19.919574 kernel: CPU features: detected: Spectre-v4 Oct 8 19:27:19.919581 kernel: CPU features: detected: Spectre-BHB Oct 8 19:27:19.919588 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 8 19:27:19.919595 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 8 19:27:19.919603 kernel: CPU features: detected: ARM erratum 1418040 Oct 8 19:27:19.919610 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 8 19:27:19.919617 kernel: alternatives: applying boot alternatives Oct 8 19:27:19.919625 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2 Oct 8 19:27:19.919632 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 8 19:27:19.919639 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 8 19:27:19.919649 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 8 19:27:19.919656 kernel: Fallback order for Node 0: 0 Oct 8 19:27:19.919663 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Oct 8 19:27:19.919670 kernel: Policy zone: DMA Oct 8 19:27:19.919680 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 8 19:27:19.919690 kernel: software IO TLB: area num 4. Oct 8 19:27:19.919697 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Oct 8 19:27:19.919705 kernel: Memory: 2386788K/2572288K available (10240K kernel code, 2184K rwdata, 8080K rodata, 39104K init, 897K bss, 185500K reserved, 0K cma-reserved) Oct 8 19:27:19.919712 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 8 19:27:19.919719 kernel: trace event string verifier disabled Oct 8 19:27:19.919725 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 8 19:27:19.919733 kernel: rcu: RCU event tracing is enabled. Oct 8 19:27:19.919740 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 8 19:27:19.919747 kernel: Trampoline variant of Tasks RCU enabled. Oct 8 19:27:19.919754 kernel: Tracing variant of Tasks RCU enabled. Oct 8 19:27:19.919761 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 8 19:27:19.919768 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 8 19:27:19.919778 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 8 19:27:19.919785 kernel: GICv3: 256 SPIs implemented Oct 8 19:27:19.919800 kernel: GICv3: 0 Extended SPIs implemented Oct 8 19:27:19.919810 kernel: Root IRQ handler: gic_handle_irq Oct 8 19:27:19.919839 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 8 19:27:19.919849 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 8 19:27:19.919855 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 8 19:27:19.919862 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) Oct 8 19:27:19.919869 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) Oct 8 19:27:19.919876 kernel: GICv3: using LPI property table @0x00000000400f0000 Oct 8 19:27:19.919890 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Oct 8 19:27:19.919900 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 8 19:27:19.919907 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 8 19:27:19.919914 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 8 19:27:19.919921 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 8 19:27:19.919928 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 8 19:27:19.919935 kernel: arm-pv: using stolen time PV Oct 8 19:27:19.919942 kernel: Console: colour dummy device 80x25 Oct 8 19:27:19.919949 kernel: ACPI: Core revision 20230628 Oct 8 19:27:19.919956 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 8 19:27:19.919962 kernel: pid_max: default: 32768 minimum: 301 Oct 8 19:27:19.919971 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Oct 8 19:27:19.919977 kernel: SELinux: Initializing. Oct 8 19:27:19.919984 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 8 19:27:19.919992 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 8 19:27:19.919999 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 8 19:27:19.920006 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 8 19:27:19.920013 kernel: rcu: Hierarchical SRCU implementation. Oct 8 19:27:19.920020 kernel: rcu: Max phase no-delay instances is 400. Oct 8 19:27:19.920027 kernel: Platform MSI: ITS@0x8080000 domain created Oct 8 19:27:19.920035 kernel: PCI/MSI: ITS@0x8080000 domain created Oct 8 19:27:19.920042 kernel: Remapping and enabling EFI services. Oct 8 19:27:19.920049 kernel: smp: Bringing up secondary CPUs ... 
Oct 8 19:27:19.920056 kernel: Detected PIPT I-cache on CPU1 Oct 8 19:27:19.920063 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 8 19:27:19.920070 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Oct 8 19:27:19.920076 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 8 19:27:19.920083 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 8 19:27:19.920090 kernel: Detected PIPT I-cache on CPU2 Oct 8 19:27:19.920097 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 8 19:27:19.920105 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Oct 8 19:27:19.920112 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 8 19:27:19.920124 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 8 19:27:19.920132 kernel: Detected PIPT I-cache on CPU3 Oct 8 19:27:19.920139 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 8 19:27:19.920146 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Oct 8 19:27:19.920154 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 8 19:27:19.920161 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 8 19:27:19.920168 kernel: smp: Brought up 1 node, 4 CPUs Oct 8 19:27:19.920176 kernel: SMP: Total of 4 processors activated. Oct 8 19:27:19.920183 kernel: CPU features: detected: 32-bit EL0 Support Oct 8 19:27:19.920191 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 8 19:27:19.920198 kernel: CPU features: detected: Common not Private translations Oct 8 19:27:19.920205 kernel: CPU features: detected: CRC32 instructions Oct 8 19:27:19.920213 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 8 19:27:19.920220 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 8 19:27:19.920227 kernel: CPU features: detected: LSE atomic instructions Oct 8 19:27:19.920235 kernel: CPU features: detected: Privileged Access Never Oct 8 19:27:19.920242 kernel: CPU features: detected: RAS Extension Support Oct 8 19:27:19.920250 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 8 19:27:19.920257 kernel: CPU: All CPU(s) started at EL1 Oct 8 19:27:19.920264 kernel: alternatives: applying system-wide alternatives Oct 8 19:27:19.920271 kernel: devtmpfs: initialized Oct 8 19:27:19.920278 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 8 19:27:19.920286 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 8 19:27:19.920293 kernel: pinctrl core: initialized pinctrl subsystem Oct 8 19:27:19.920301 kernel: SMBIOS 3.0.0 present. 
Oct 8 19:27:19.920309 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Oct 8 19:27:19.920316 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 8 19:27:19.920323 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 8 19:27:19.920331 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 8 19:27:19.920338 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 8 19:27:19.920345 kernel: audit: initializing netlink subsys (disabled) Oct 8 19:27:19.920352 kernel: audit: type=2000 audit(0.034:1): state=initialized audit_enabled=0 res=1 Oct 8 19:27:19.920359 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 8 19:27:19.920368 kernel: cpuidle: using governor menu Oct 8 19:27:19.920376 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 8 19:27:19.920383 kernel: ASID allocator initialised with 32768 entries Oct 8 19:27:19.920390 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 8 19:27:19.920397 kernel: Serial: AMBA PL011 UART driver Oct 8 19:27:19.920405 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 8 19:27:19.920412 kernel: Modules: 0 pages in range for non-PLT usage Oct 8 19:27:19.920419 kernel: Modules: 509104 pages in range for PLT usage Oct 8 19:27:19.920426 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 8 19:27:19.920435 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 8 19:27:19.920442 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 8 19:27:19.920449 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 8 19:27:19.920456 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 8 19:27:19.920464 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 8 19:27:19.920471 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 8 19:27:19.920478 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 8 19:27:19.920485 kernel: ACPI: Added _OSI(Module Device) Oct 8 19:27:19.920493 kernel: ACPI: Added _OSI(Processor Device) Oct 8 19:27:19.920501 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 8 19:27:19.920508 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 8 19:27:19.920516 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 8 19:27:19.920523 kernel: ACPI: Interpreter enabled Oct 8 19:27:19.920530 kernel: ACPI: Using GIC for interrupt routing Oct 8 19:27:19.920537 kernel: ACPI: MCFG table detected, 1 entries Oct 8 19:27:19.920545 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 8 19:27:19.920552 kernel: printk: console [ttyAMA0] enabled Oct 8 19:27:19.920559 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 8 19:27:19.920691 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 8 19:27:19.920763 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 8 19:27:19.920856 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 8 19:27:19.920932 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 8 19:27:19.920997 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 8 19:27:19.921007 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 8 19:27:19.921015 kernel: PCI host bridge to bus 0000:00
Oct 8 19:27:19.921090 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 8 19:27:19.921164 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 8 19:27:19.921227 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 8 19:27:19.921286 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 8 19:27:19.921365 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Oct 8 19:27:19.921440 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Oct 8 19:27:19.921513 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Oct 8 19:27:19.921601 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Oct 8 19:27:19.921670 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Oct 8 19:27:19.921739 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Oct 8 19:27:19.921835 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Oct 8 19:27:19.921914 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Oct 8 19:27:19.921977 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 8 19:27:19.922037 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 8 19:27:19.922100 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 8 19:27:19.922110 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 8 19:27:19.922118 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 8 19:27:19.922125 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 8 19:27:19.922132 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 8 19:27:19.922140 kernel: iommu: Default domain type: Translated Oct 8 19:27:19.922147 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 8 19:27:19.922155 kernel: efivars: Registered efivars operations Oct 8 19:27:19.922164 kernel: vgaarb: loaded Oct 8 19:27:19.922171 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 8 19:27:19.922179 kernel: VFS: Disk quotas dquot_6.6.0 Oct 8 19:27:19.922186 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 8 19:27:19.922193 kernel: pnp: PnP ACPI init Oct 8 19:27:19.922270 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 8 19:27:19.922280 kernel: pnp: PnP ACPI: found 1 devices Oct 8 19:27:19.922288 kernel: NET: Registered PF_INET protocol family Oct 8 19:27:19.922298 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 8 19:27:19.922305 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 8 19:27:19.922313 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 8 19:27:19.922320 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 8 19:27:19.922328 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 8 19:27:19.922335 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 8 19:27:19.922343 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 8 19:27:19.922350 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 8 19:27:19.922357 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 8 19:27:19.922366 kernel: PCI: CLS 0 bytes, default 64 Oct 8 19:27:19.922373 kernel: kvm [1]: HYP mode not available Oct 8 19:27:19.922380 kernel: Initialise system trusted keyrings
Oct 8 19:27:19.922387 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 8 19:27:19.922395 kernel: Key type asymmetric registered Oct 8 19:27:19.922402 kernel: Asymmetric key parser 'x509' registered Oct 8 19:27:19.922409 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 8 19:27:19.922416 kernel: io scheduler mq-deadline registered Oct 8 19:27:19.922423 kernel: io scheduler kyber registered Oct 8 19:27:19.922432 kernel: io scheduler bfq registered Oct 8 19:27:19.922439 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 8 19:27:19.922446 kernel: ACPI: button: Power Button [PWRB] Oct 8 19:27:19.922457 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 8 19:27:19.922527 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 8 19:27:19.922538 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 8 19:27:19.922545 kernel: thunder_xcv, ver 1.0 Oct 8 19:27:19.922556 kernel: thunder_bgx, ver 1.0 Oct 8 19:27:19.922564 kernel: nicpf, ver 1.0 Oct 8 19:27:19.922576 kernel: nicvf, ver 1.0 Oct 8 19:27:19.922662 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 8 19:27:19.922731 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T19:27:19 UTC (1728415639) Oct 8 19:27:19.922741 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 8 19:27:19.922749 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 8 19:27:19.922756 kernel: watchdog: Delayed init of the lockup detector failed: -19 Oct 8 19:27:19.922764 kernel: watchdog: Hard watchdog permanently disabled Oct 8 19:27:19.922772 kernel: NET: Registered PF_INET6 protocol family Oct 8 19:27:19.922781 kernel: Segment Routing with IPv6 Oct 8 19:27:19.922797 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 19:27:19.922805 kernel: NET: Registered PF_PACKET protocol family Oct 8 19:27:19.922813 kernel: Key type dns_resolver registered Oct 8 19:27:19.922820 kernel: registered taskstats version 1 Oct 8 19:27:19.922827 kernel: Loading compiled-in X.509 certificates Oct 8 19:27:19.922835 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e5b54c43c129014ce5ace0e8cd7b641a0fcb136e' Oct 8 19:27:19.922842 kernel: Key type .fscrypt registered Oct 8 19:27:19.922849 kernel: Key type fscrypt-provisioning registered Oct 8 19:27:19.922858 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 8 19:27:19.922866 kernel: ima: Allocated hash algorithm: sha1 Oct 8 19:27:19.922873 kernel: ima: No architecture policies found Oct 8 19:27:19.922881 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 8 19:27:19.922894 kernel: clk: Disabling unused clocks Oct 8 19:27:19.922901 kernel: Freeing unused kernel memory: 39104K Oct 8 19:27:19.922909 kernel: Run /init as init process Oct 8 19:27:19.922916 kernel: with arguments: Oct 8 19:27:19.922923 kernel: /init Oct 8 19:27:19.922933 kernel: with environment: Oct 8 19:27:19.922940 kernel: HOME=/ Oct 8 19:27:19.922947 kernel: TERM=linux Oct 8 19:27:19.922954 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 8 19:27:19.922963 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 19:27:19.922972 systemd[1]: Detected virtualization kvm.
Oct 8 19:27:19.922980 systemd[1]: Detected architecture arm64. Oct 8 19:27:19.922987 systemd[1]: Running in initrd. Oct 8 19:27:19.922996 systemd[1]: No hostname configured, using default hostname. Oct 8 19:27:19.923003 systemd[1]: Hostname set to <localhost>. Oct 8 19:27:19.923011 systemd[1]: Initializing machine ID from VM UUID. Oct 8 19:27:19.923019 systemd[1]: Queued start job for default target initrd.target. Oct 8 19:27:19.923027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:27:19.923034 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:27:19.923043 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 8 19:27:19.923051 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 19:27:19.923062 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 8 19:27:19.923070 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 8 19:27:19.923079 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 8 19:27:19.923087 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 8 19:27:19.923095 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:27:19.923103 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:27:19.923112 systemd[1]: Reached target paths.target - Path Units. Oct 8 19:27:19.923126 systemd[1]: Reached target slices.target - Slice Units. Oct 8 19:27:19.923134 systemd[1]: Reached target swap.target - Swaps. Oct 8 19:27:19.923141 systemd[1]: Reached target timers.target - Timer Units. Oct 8 19:27:19.923149 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 19:27:19.923157 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 19:27:19.923165 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 8 19:27:19.923173 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 8 19:27:19.923180 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 19:27:19.923189 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 19:27:19.923197 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 19:27:19.923206 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 19:27:19.923213 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 8 19:27:19.923221 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 19:27:19.923230 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 8 19:27:19.923238 systemd[1]: Starting systemd-fsck-usr.service... Oct 8 19:27:19.923245 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 19:27:19.923253 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 19:27:19.923263 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:27:19.923271 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 8 19:27:19.923282 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:27:19.923290 systemd[1]: Finished systemd-fsck-usr.service. Oct 8 19:27:19.923299 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 19:27:19.923309 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 19:27:19.923334 systemd-journald[237]: Collecting audit messages is disabled. Oct 8 19:27:19.923353 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 19:27:19.923363 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:27:19.923371 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 8 19:27:19.923378 kernel: Bridge firewalling registered Oct 8 19:27:19.923389 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:27:19.923398 systemd-journald[237]: Journal started Oct 8 19:27:19.923416 systemd-journald[237]: Runtime Journal (/run/log/journal/9caefcee020547b2a9a8f09bda0e7680) is 5.9M, max 47.3M, 41.4M free. Oct 8 19:27:19.905848 systemd-modules-load[238]: Inserted module 'overlay' Oct 8 19:27:19.923083 systemd-modules-load[238]: Inserted module 'br_netfilter' Oct 8 19:27:19.926439 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 19:27:19.927411 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 19:27:19.928369 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 19:27:19.932468 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:27:19.933947 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Oct 8 19:27:19.946853 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 8 19:27:19.948022 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:27:19.951038 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:27:19.964936 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 8 19:27:19.967031 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 19:27:19.976553 dracut-cmdline[276]: dracut-dracut-053 Oct 8 19:27:19.979032 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2 Oct 8 19:27:19.998069 systemd-resolved[278]: Positive Trust Anchors:
Oct 8 19:27:19.998084 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 19:27:19.998115 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Oct 8 19:27:20.002713 systemd-resolved[278]: Defaulting to hostname 'linux'. Oct 8 19:27:20.003907 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 19:27:20.004986 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 19:27:20.048815 kernel: SCSI subsystem initialized Oct 8 19:27:20.052808 kernel: Loading iSCSI transport class v2.0-870. Oct 8 19:27:20.060815 kernel: iscsi: registered transport (tcp) Oct 8 19:27:20.073816 kernel: iscsi: registered transport (qla4xxx) Oct 8 19:27:20.073831 kernel: QLogic iSCSI HBA Driver Oct 8 19:27:20.114321 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 8 19:27:20.128934 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 8 19:27:20.146079 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 8 19:27:20.146127 kernel: device-mapper: uevent: version 1.0.3 Oct 8 19:27:20.146138 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 8 19:27:20.194811 kernel: raid6: neonx8 gen() 15723 MB/s Oct 8 19:27:20.211802 kernel: raid6: neonx4 gen() 15659 MB/s Oct 8 19:27:20.228800 kernel: raid6: neonx2 gen() 13230 MB/s Oct 8 19:27:20.245801 kernel: raid6: neonx1 gen() 10464 MB/s Oct 8 19:27:20.262804 kernel: raid6: int64x8 gen() 6958 MB/s Oct 8 19:27:20.279807 kernel: raid6: int64x4 gen() 7330 MB/s Oct 8 19:27:20.296804 kernel: raid6: int64x2 gen() 6117 MB/s Oct 8 19:27:20.313801 kernel: raid6: int64x1 gen() 5049 MB/s Oct 8 19:27:20.313827 kernel: raid6: using algorithm neonx8 gen() 15723 MB/s Oct 8 19:27:20.330807 kernel: raid6: .... xor() 11892 MB/s, rmw enabled Oct 8 19:27:20.330836 kernel: raid6: using neon recovery algorithm Oct 8 19:27:20.335808 kernel: xor: measuring software checksum speed Oct 8 19:27:20.335827 kernel: 8regs : 19769 MB/sec Oct 8 19:27:20.337263 kernel: 32regs : 17209 MB/sec Oct 8 19:27:20.337276 kernel: arm64_neon : 26919 MB/sec Oct 8 19:27:20.337285 kernel: xor: using function: arm64_neon (26919 MB/sec) Oct 8 19:27:20.388807 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 8 19:27:20.399158 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 8 19:27:20.413952 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:27:20.426318 systemd-udevd[462]: Using default interface naming scheme 'v255'. Oct 8 19:27:20.429412 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:27:20.431860 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 8 19:27:20.447760 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation Oct 8 19:27:20.476333 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:27:20.491955 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 19:27:20.531825 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:27:20.538957 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 8 19:27:20.554231 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 19:27:20.555489 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 19:27:20.557875 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:27:20.558638 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 19:27:20.565135 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 19:27:20.579111 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 19:27:20.587513 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 8 19:27:20.587715 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 8 19:27:20.590181 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 19:27:20.590290 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:27:20.595014 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 8 19:27:20.595036 kernel: GPT:9289727 != 19775487 Oct 8 19:27:20.595045 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 8 19:27:20.595061 kernel: GPT:9289727 != 19775487 Oct 8 19:27:20.595072 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 8 19:27:20.592597 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:27:20.597742 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 19:27:20.596091 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 19:27:20.596182 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:27:20.598902 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:27:20.610981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:27:20.613955 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (523) Oct 8 19:27:20.619810 kernel: BTRFS: device fsid a2a78d47-736b-4018-a518-3cfb16920575 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (508) Oct 8 19:27:20.624890 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 8 19:27:20.625960 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:27:20.634226 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 8 19:27:20.638366 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 8 19:27:20.642050 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 8 19:27:20.642903 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 8 19:27:20.656990 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 19:27:20.658437 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:27:20.663596 disk-uuid[551]: Primary Header is updated. 
Oct 8 19:27:20.663596 disk-uuid[551]: Secondary Entries is updated. Oct 8 19:27:20.663596 disk-uuid[551]: Secondary Header is updated. Oct 8 19:27:20.665969 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 19:27:20.681536 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:27:21.678827 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 8 19:27:21.679261 disk-uuid[552]: The operation has completed successfully. Oct 8 19:27:21.696361 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 19:27:21.696451 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 8 19:27:21.723026 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 8 19:27:21.725614 sh[575]: Success Oct 8 19:27:21.740820 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 8 19:27:21.777114 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 8 19:27:21.778496 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 19:27:21.780814 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 8 19:27:21.788853 kernel: BTRFS info (device dm-0): first mount of filesystem a2a78d47-736b-4018-a518-3cfb16920575 Oct 8 19:27:21.788887 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:27:21.788904 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 19:27:21.790154 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 19:27:21.790803 kernel: BTRFS info (device dm-0): using free space tree Oct 8 19:27:21.793653 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 19:27:21.794736 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 8 19:27:21.805919 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 8 19:27:21.807130 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 8 19:27:21.813265 kernel: BTRFS info (device vda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:27:21.813301 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:27:21.813311 kernel: BTRFS info (device vda6): using free space tree Oct 8 19:27:21.815831 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 19:27:21.823245 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 8 19:27:21.823907 kernel: BTRFS info (device vda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:27:21.829864 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 8 19:27:21.838940 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 19:27:21.895179 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 19:27:21.908985 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Oct 8 19:27:21.930653 ignition[666]: Ignition 2.18.0 Oct 8 19:27:21.930663 ignition[666]: Stage: fetch-offline Oct 8 19:27:21.930697 ignition[666]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:27:21.930706 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:27:21.930800 ignition[666]: parsed url from cmdline: "" Oct 8 19:27:21.930805 ignition[666]: no config URL provided Oct 8 19:27:21.930810 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 19:27:21.930818 ignition[666]: no config at "/usr/lib/ignition/user.ign" Oct 8 19:27:21.930842 ignition[666]: op(1): [started] loading QEMU firmware config module Oct 8 19:27:21.935830 systemd-networkd[765]: lo: Link UP Oct 8 19:27:21.930847 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 8 19:27:21.935834 systemd-networkd[765]: lo: Gained carrier Oct 8 19:27:21.940424 ignition[666]: op(1): [finished] loading QEMU firmware config module Oct 8 19:27:21.936477 systemd-networkd[765]: Enumeration completed Oct 8 19:27:21.937780 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:27:21.937782 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 19:27:21.937864 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 19:27:21.938898 systemd[1]: Reached target network.target - Network. Oct 8 19:27:21.939542 systemd-networkd[765]: eth0: Link UP Oct 8 19:27:21.939545 systemd-networkd[765]: eth0: Gained carrier Oct 8 19:27:21.939552 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:27:21.960825 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 19:27:21.984002 ignition[666]: parsing config with SHA512: 742f5f5c24ee8ae4d27709d9b8107e86323a7eb49e04775c8b71bd4738f8a06171851adb09702e803317984b211064f2077e35859c8e30a1e1dd998d64d2b919 Oct 8 19:27:21.987823 unknown[666]: fetched base config from "system" Oct 8 19:27:21.987833 unknown[666]: fetched user config from "qemu" Oct 8 19:27:21.989175 ignition[666]: fetch-offline: fetch-offline passed Oct 8 19:27:21.989243 ignition[666]: Ignition finished successfully Oct 8 19:27:21.992824 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 19:27:21.993781 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 8 19:27:21.999929 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 8 19:27:22.011751 ignition[772]: Ignition 2.18.0 Oct 8 19:27:22.011760 ignition[772]: Stage: kargs Oct 8 19:27:22.011925 ignition[772]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:27:22.011935 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:27:22.012736 ignition[772]: kargs: kargs passed Oct 8 19:27:22.012776 ignition[772]: Ignition finished successfully Oct 8 19:27:22.015493 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 8 19:27:22.028934 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Oct 8 19:27:22.038018 ignition[781]: Ignition 2.18.0 Oct 8 19:27:22.038026 ignition[781]: Stage: disks Oct 8 19:27:22.038173 ignition[781]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:27:22.038182 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:27:22.039034 ignition[781]: disks: disks passed Oct 8 19:27:22.039078 ignition[781]: Ignition finished successfully Oct 8 19:27:22.040960 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 8 19:27:22.043050 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 8 19:27:22.043838 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 8 19:27:22.045383 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 19:27:22.046755 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 19:27:22.048030 systemd[1]: Reached target basic.target - Basic System. Oct 8 19:27:22.059976 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 8 19:27:22.070847 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 8 19:27:22.074460 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 8 19:27:22.092920 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 8 19:27:22.132811 kernel: EXT4-fs (vda9): mounted filesystem fbf53fb2-c32f-44fa-a235-3100e56d8882 r/w with ordered data mode. Quota mode: none. Oct 8 19:27:22.133334 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 8 19:27:22.134304 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 8 19:27:22.151867 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 19:27:22.153237 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 8 19:27:22.154207 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 8 19:27:22.154272 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 8 19:27:22.154323 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 19:27:22.160031 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801) Oct 8 19:27:22.159924 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 8 19:27:22.162745 kernel: BTRFS info (device vda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:27:22.162760 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:27:22.162770 kernel: BTRFS info (device vda6): using free space tree Oct 8 19:27:22.162263 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 8 19:27:22.165454 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 19:27:22.166986 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 19:27:22.208805 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 19:27:22.211869 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Oct 8 19:27:22.214947 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 19:27:22.218279 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 19:27:22.280540 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Oct 8 19:27:22.295917 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 8 19:27:22.297244 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 8 19:27:22.301814 kernel: BTRFS info (device vda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:27:22.317550 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 8 19:27:22.319190 ignition[914]: INFO : Ignition 2.18.0 Oct 8 19:27:22.319190 ignition[914]: INFO : Stage: mount Oct 8 19:27:22.319190 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:27:22.319190 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:27:22.322450 ignition[914]: INFO : mount: mount passed Oct 8 19:27:22.322450 ignition[914]: INFO : Ignition finished successfully Oct 8 19:27:22.320989 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 19:27:22.333925 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 19:27:22.788526 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 19:27:22.798028 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 19:27:22.803804 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928) Oct 8 19:27:22.805451 kernel: BTRFS info (device vda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:27:22.805474 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:27:22.805974 kernel: BTRFS info (device vda6): using free space tree Oct 8 19:27:22.807810 kernel: BTRFS info (device vda6): auto enabling async discard Oct 8 19:27:22.808910 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 19:27:22.824972 ignition[945]: INFO : Ignition 2.18.0 Oct 8 19:27:22.824972 ignition[945]: INFO : Stage: files Oct 8 19:27:22.826143 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:27:22.826143 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:27:22.826143 ignition[945]: DEBUG : files: compiled without relabeling support, skipping Oct 8 19:27:22.828698 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 19:27:22.828698 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 19:27:22.828698 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 19:27:22.831692 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 19:27:22.831692 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 19:27:22.831692 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 8 19:27:22.831692 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Oct 8 19:27:22.829119 unknown[945]: wrote ssh authorized keys file for user: core Oct 8 19:27:22.873119 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 8 19:27:23.019757 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 8 19:27:23.019757 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 8 19:27:23.019757 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Oct 8 19:27:23.387187 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 8 19:27:23.583101 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 8 19:27:23.584362 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 8 19:27:23.585661 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 8 19:27:23.585661 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 19:27:23.585661 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 19:27:23.585661 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 19:27:23.585661 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 19:27:23.585661 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 19:27:23.585661 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 19:27:23.585661 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 19:27:23.585661 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 19:27:23.585661 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 8 19:27:23.585661 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 8 19:27:23.585661 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 8 19:27:23.585661 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Oct 8 19:27:23.631947 systemd-networkd[765]: eth0: Gained IPv6LL Oct 8 19:27:23.888731 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 8 19:27:24.457334 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 8 19:27:24.457334 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 8 19:27:24.460262 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:27:24.462262 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 19:27:24.462262 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 8 19:27:24.462262 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 8 19:27:24.462262 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 8 19:27:24.462262 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 8 19:27:24.462262 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 8 19:27:24.462262 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 8 19:27:24.480916 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 8 19:27:24.484489 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 8 19:27:24.486554 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 8 19:27:24.486554 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 8 19:27:24.486554 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 8 19:27:24.486554 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 8 19:27:24.486554 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 8 19:27:24.486554 ignition[945]: INFO : files: files passed Oct 8 19:27:24.486554 ignition[945]: INFO : Ignition finished successfully Oct 8 19:27:24.487623 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 8 19:27:24.504933 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 8 19:27:24.507941 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 8 19:27:24.510421 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 19:27:24.510503 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 8 19:27:24.514720 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory Oct 8 19:27:24.517827 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:27:24.517827 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:27:24.521856 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:27:24.524225 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 19:27:24.525258 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 19:27:24.538942 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 19:27:24.557306 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 19:27:24.557398 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:27:24.558990 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 8 19:27:24.561051 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 8 19:27:24.562445 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 19:27:24.576943 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 19:27:24.589871 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 19:27:24.591815 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 19:27:24.602384 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 19:27:24.603314 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:27:24.604202 systemd[1]: Stopped target timers.target - Timer Units. Oct 8 19:27:24.606412 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 19:27:24.606524 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 19:27:24.608272 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 8 19:27:24.609699 systemd[1]: Stopped target basic.target - Basic System. Oct 8 19:27:24.610855 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 19:27:24.612075 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 19:27:24.613433 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 8 19:27:24.614766 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 19:27:24.616077 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 19:27:24.617450 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 19:27:24.618787 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 19:27:24.620015 systemd[1]: Stopped target swap.target - Swaps. Oct 8 19:27:24.621067 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 8 19:27:24.621171 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 8 19:27:24.622784 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:27:24.624168 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:27:24.625519 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 8 19:27:24.630088 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:27:24.631001 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 19:27:24.631104 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 8 19:27:24.633059 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 8 19:27:24.633168 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 19:27:24.634567 systemd[1]: Stopped target paths.target - Path Units. Oct 8 19:27:24.635644 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 19:27:24.639029 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:27:24.639946 systemd[1]: Stopped target slices.target - Slice Units. Oct 8 19:27:24.641420 systemd[1]: Stopped target sockets.target - Socket Units. Oct 8 19:27:24.642482 systemd[1]: iscsid.socket: Deactivated successfully. 
Oct 8 19:27:24.642562 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 19:27:24.643636 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 8 19:27:24.643708 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 19:27:24.644776 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 8 19:27:24.644895 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 19:27:24.646117 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 19:27:24.646207 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 19:27:24.659225 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 19:27:24.662206 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 8 19:27:24.663137 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 19:27:24.663270 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:27:24.664498 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 8 19:27:24.664586 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 19:27:24.672540 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 8 19:27:24.672621 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 8 19:27:24.683426 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 19:27:24.684856 ignition[1000]: INFO : Ignition 2.18.0 Oct 8 19:27:24.684856 ignition[1000]: INFO : Stage: umount Oct 8 19:27:24.684856 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:27:24.684856 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 8 19:27:24.684856 ignition[1000]: INFO : umount: umount passed Oct 8 19:27:24.684856 ignition[1000]: INFO : Ignition finished successfully Oct 8 19:27:24.686105 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 19:27:24.687821 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 8 19:27:24.690311 systemd[1]: Stopped target network.target - Network. Oct 8 19:27:24.691055 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 19:27:24.691133 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 19:27:24.693572 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 19:27:24.693618 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 19:27:24.694861 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 8 19:27:24.694911 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 8 19:27:24.696175 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 8 19:27:24.696216 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 8 19:27:24.697142 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 8 19:27:24.698420 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 8 19:27:24.707320 systemd-networkd[765]: eth0: DHCPv6 lease lost Oct 8 19:27:24.709931 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 8 19:27:24.710733 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 8 19:27:24.713019 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 8 19:27:24.713149 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 8 19:27:24.715981 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Oct 8 19:27:24.716039 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 8 19:27:24.723919 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 8 19:27:24.724546 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 8 19:27:24.724595 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 19:27:24.726118 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 19:27:24.726157 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:27:24.727469 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 8 19:27:24.727510 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 8 19:27:24.729038 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 8 19:27:24.729079 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 8 19:27:24.730468 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:27:24.740237 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 8 19:27:24.741572 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 8 19:27:24.742683 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 8 19:27:24.742812 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:27:24.744440 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 8 19:27:24.744507 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 8 19:27:24.746096 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 8 19:27:24.746129 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 19:27:24.747340 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 8 19:27:24.747383 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 8 19:27:24.749404 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 8 19:27:24.749444 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 8 19:27:24.752418 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 19:27:24.752469 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:27:24.759068 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 8 19:27:24.759802 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 8 19:27:24.759853 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 19:27:24.761776 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 8 19:27:24.761832 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 19:27:24.763509 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 8 19:27:24.763551 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 19:27:24.765148 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 19:27:24.765188 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:27:24.766929 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 8 19:27:24.767012 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 8 19:27:24.768468 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Oct 8 19:27:24.768538 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 8 19:27:24.770319 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 8 19:27:24.771127 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 8 19:27:24.771178 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 8 19:27:24.773111 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 8 19:27:24.782444 systemd[1]: Switching root. Oct 8 19:27:24.806680 systemd-journald[237]: Journal stopped Oct 8 19:27:25.500704 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Oct 8 19:27:25.500768 kernel: SELinux: policy capability network_peer_controls=1 Oct 8 19:27:25.500786 kernel: SELinux: policy capability open_perms=1 Oct 8 19:27:25.500814 kernel: SELinux: policy capability extended_socket_class=1 Oct 8 19:27:25.500824 kernel: SELinux: policy capability always_check_network=0 Oct 8 19:27:25.500834 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 8 19:27:25.500848 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 8 19:27:25.500860 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 8 19:27:25.500879 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 8 19:27:25.500893 kernel: audit: type=1403 audit(1728415644.964:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 8 19:27:25.500904 systemd[1]: Successfully loaded SELinux policy in 30.988ms. Oct 8 19:27:25.500924 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.988ms. Oct 8 19:27:25.500936 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 19:27:25.500947 systemd[1]: Detected virtualization kvm. Oct 8 19:27:25.500957 systemd[1]: Detected architecture arm64. Oct 8 19:27:25.500969 systemd[1]: Detected first boot. Oct 8 19:27:25.500981 systemd[1]: Initializing machine ID from VM UUID. Oct 8 19:27:25.500992 zram_generator::config[1044]: No configuration found. Oct 8 19:27:25.501003 systemd[1]: Populated /etc with preset unit settings. Oct 8 19:27:25.501014 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 8 19:27:25.501024 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 8 19:27:25.501035 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 8 19:27:25.501046 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 8 19:27:25.501059 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 8 19:27:25.501070 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 8 19:27:25.501080 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 8 19:27:25.501092 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 8 19:27:25.501102 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 8 19:27:25.501114 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 8 19:27:25.501125 systemd[1]: Created slice user.slice - User and Session Slice. 
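The prefix timestamps in this log (e.g. "Oct 8 19:27:24.806680") carry no year, but they are precise enough to measure the switch-root window, from the "Journal stopped" record in the initrd to the next timestamped record after the switch to the real root. A small illustrative Python sketch, with the year supplied as an assumption:

    from datetime import datetime

    FMT = "%b %d %H:%M:%S.%f"            # matches prefixes like "Oct 8 19:27:24.806680"

    def parse_ts(prefix, year=2024):
        """Parse a console-log timestamp prefix; the year is assumed, not logged."""
        return datetime.strptime(prefix, FMT).replace(year=year)

    # Two timestamps copied verbatim from the log above.
    t_journal_stopped = parse_ts("Oct 8 19:27:24.806680")
    t_next_record     = parse_ts("Oct 8 19:27:25.500704")
    delta = (t_next_record - t_journal_stopped).total_seconds()
    print(f"switch-root window: {delta:.3f}s")   # about 0.694 s for this boot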
Oct 8 19:27:25.501136 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:27:25.501147 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:27:25.501161 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 8 19:27:25.501171 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 8 19:27:25.501182 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 8 19:27:25.501194 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 19:27:25.501207 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 8 19:27:25.501217 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:27:25.501228 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 8 19:27:25.501239 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 8 19:27:25.501251 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 8 19:27:25.501262 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 8 19:27:25.501273 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:27:25.501283 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 19:27:25.501294 systemd[1]: Reached target slices.target - Slice Units. Oct 8 19:27:25.501304 systemd[1]: Reached target swap.target - Swaps. Oct 8 19:27:25.501315 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 8 19:27:25.501325 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 8 19:27:25.501337 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 19:27:25.501349 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 19:27:25.501359 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 19:27:25.501370 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 8 19:27:25.501380 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 8 19:27:25.501390 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 8 19:27:25.501400 systemd[1]: Mounting media.mount - External Media Directory... Oct 8 19:27:25.501411 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 8 19:27:25.501421 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 8 19:27:25.501433 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 8 19:27:25.501444 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 8 19:27:25.501454 systemd[1]: Reached target machines.target - Containers. Oct 8 19:27:25.501465 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 8 19:27:25.501476 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:27:25.501487 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Oct 8 19:27:25.501497 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 8 19:27:25.501507 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:27:25.501517 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 19:27:25.501529 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:27:25.501540 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 8 19:27:25.501550 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:27:25.501561 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 8 19:27:25.501572 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 8 19:27:25.501583 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 8 19:27:25.501597 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 8 19:27:25.501615 systemd[1]: Stopped systemd-fsck-usr.service. Oct 8 19:27:25.501627 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 19:27:25.501637 kernel: fuse: init (API version 7.39) Oct 8 19:27:25.501646 kernel: loop: module loaded Oct 8 19:27:25.501656 kernel: ACPI: bus type drm_connector registered Oct 8 19:27:25.501665 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 19:27:25.501676 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 8 19:27:25.501686 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 8 19:27:25.501697 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 19:27:25.501707 systemd[1]: verity-setup.service: Deactivated successfully. Oct 8 19:27:25.501720 systemd[1]: Stopped verity-setup.service. Oct 8 19:27:25.501730 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 8 19:27:25.501742 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 8 19:27:25.501753 systemd[1]: Mounted media.mount - External Media Directory. Oct 8 19:27:25.501763 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 8 19:27:25.501775 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 8 19:27:25.501810 systemd-journald[1107]: Collecting audit messages is disabled. Oct 8 19:27:25.501835 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 8 19:27:25.501847 systemd-journald[1107]: Journal started Oct 8 19:27:25.501875 systemd-journald[1107]: Runtime Journal (/run/log/journal/9caefcee020547b2a9a8f09bda0e7680) is 5.9M, max 47.3M, 41.4M free. Oct 8 19:27:25.306576 systemd[1]: Queued start job for default target multi-user.target. Oct 8 19:27:25.325909 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 8 19:27:25.326302 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 8 19:27:25.504519 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 8 19:27:25.506305 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 19:27:25.507071 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 19:27:25.508295 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 8 19:27:25.508447 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Oct 8 19:27:25.509765 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:27:25.509933 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:27:25.511094 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 19:27:25.511235 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 19:27:25.512381 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:27:25.512514 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:27:25.513780 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 8 19:27:25.513946 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 8 19:27:25.515060 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:27:25.515181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:27:25.517873 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 19:27:25.519079 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 8 19:27:25.520353 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 8 19:27:25.532294 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 8 19:27:25.540917 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 8 19:27:25.542784 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 8 19:27:25.543701 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 8 19:27:25.543735 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 19:27:25.545515 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 8 19:27:25.547532 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 8 19:27:25.549514 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 8 19:27:25.550447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:27:25.551980 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 8 19:27:25.553743 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 8 19:27:25.554752 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 19:27:25.558992 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 8 19:27:25.560893 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 19:27:25.563128 systemd-journald[1107]: Time spent on flushing to /var/log/journal/9caefcee020547b2a9a8f09bda0e7680 is 24.742ms for 857 entries. Oct 8 19:27:25.563128 systemd-journald[1107]: System Journal (/var/log/journal/9caefcee020547b2a9a8f09bda0e7680) is 8.0M, max 195.6M, 187.6M free. Oct 8 19:27:25.596517 systemd-journald[1107]: Received client request to flush runtime journal. Oct 8 19:27:25.596614 kernel: loop0: detected capacity change from 0 to 113672 Oct 8 19:27:25.596629 kernel: block loop0: the capability attribute has been deprecated. Oct 8 19:27:25.566018 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Oct 8 19:27:25.568017 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 8 19:27:25.573100 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 19:27:25.577231 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:27:25.578603 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 8 19:27:25.579683 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 8 19:27:25.581463 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 8 19:27:25.582818 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 8 19:27:25.586897 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 8 19:27:25.596630 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 8 19:27:25.603759 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 8 19:27:25.609077 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 8 19:27:25.607375 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 8 19:27:25.618598 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:27:25.622400 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Oct 8 19:27:25.622415 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Oct 8 19:27:25.624253 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 8 19:27:25.626502 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 8 19:27:25.631256 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 19:27:25.633811 kernel: loop1: detected capacity change from 0 to 194512 Oct 8 19:27:25.634055 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 8 19:27:25.652060 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 8 19:27:25.667541 kernel: loop2: detected capacity change from 0 to 59688 Oct 8 19:27:25.678884 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 8 19:27:25.692016 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 19:27:25.698873 kernel: loop3: detected capacity change from 0 to 113672 Oct 8 19:27:25.703854 kernel: loop4: detected capacity change from 0 to 194512 Oct 8 19:27:25.704618 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Oct 8 19:27:25.704637 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Oct 8 19:27:25.709832 kernel: loop5: detected capacity change from 0 to 59688 Oct 8 19:27:25.709857 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 19:27:25.713069 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 8 19:27:25.713443 (sd-merge)[1181]: Merged extensions into '/usr'. Oct 8 19:27:25.716842 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Oct 8 19:27:25.716856 systemd[1]: Reloading... Oct 8 19:27:25.771930 zram_generator::config[1207]: No configuration found. 
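The (sd-merge) lines above show systemd-sysext picking up the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images and merging them into /usr. A rough Python sketch of how one might enumerate such images on a running system follows; the directory list is reproduced from memory and should be treated as an assumption rather than an exhaustive account of systemd-sysext's search path:

    from pathlib import Path

    # Common systemd-sysext image locations (assumed list, not exhaustive).
    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extension_images():
        """Yield (image name, resolved target) for each *.raw extension image found."""
        for base in map(Path, SEARCH_PATHS):
            if not base.is_dir():
                continue
            for image in sorted(base.glob("*.raw")):
                yield image.name, image.resolve()   # resolve() follows symlinks

    if __name__ == "__main__":
        for name, target in list_extension_images():
            print(f"{name} -> {target}")

On this machine it should at least report kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw, matching the symlink Ignition wrote earlier in the log.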
Oct 8 19:27:25.818732 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 8 19:27:25.873132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:27:25.910558 systemd[1]: Reloading finished in 193 ms. Oct 8 19:27:25.935410 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 8 19:27:25.936637 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 8 19:27:25.942240 systemd[1]: Starting ensure-sysext.service... Oct 8 19:27:25.943889 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Oct 8 19:27:25.955297 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Oct 8 19:27:25.955313 systemd[1]: Reloading... Oct 8 19:27:25.967759 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 8 19:27:25.968053 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 8 19:27:25.968660 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 8 19:27:25.968906 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Oct 8 19:27:25.968958 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Oct 8 19:27:25.973214 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 19:27:25.973228 systemd-tmpfiles[1242]: Skipping /boot Oct 8 19:27:25.979705 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 19:27:25.979723 systemd-tmpfiles[1242]: Skipping /boot Oct 8 19:27:26.007820 zram_generator::config[1268]: No configuration found. Oct 8 19:27:26.094940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:27:26.133387 systemd[1]: Reloading finished in 177 ms. Oct 8 19:27:26.149831 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 8 19:27:26.167223 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 8 19:27:26.175342 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:27:26.177540 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 8 19:27:26.179532 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 8 19:27:26.183076 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 19:27:26.190104 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:27:26.194913 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 8 19:27:26.197550 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:27:26.203162 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:27:26.205018 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Oct 8 19:27:26.209460 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:27:26.210362 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:27:26.211091 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 8 19:27:26.214180 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:27:26.214290 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:27:26.215523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:27:26.215627 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:27:26.217084 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:27:26.217203 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:27:26.224239 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:27:26.225471 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:27:26.227674 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:27:26.231267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:27:26.232532 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:27:26.233988 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 8 19:27:26.242226 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 8 19:27:26.244183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:27:26.244342 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:27:26.245940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:27:26.246965 systemd-udevd[1309]: Using default interface naming scheme 'v255'. Oct 8 19:27:26.248833 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:27:26.251106 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 8 19:27:26.254387 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:27:26.254537 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:27:26.257810 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 8 19:27:26.265208 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:27:26.269516 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 8 19:27:26.279880 systemd[1]: Finished ensure-sysext.service. Oct 8 19:27:26.285834 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:27:26.294353 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:27:26.297199 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 19:27:26.299533 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:27:26.301980 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:27:26.303304 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Oct 8 19:27:26.309316 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 19:27:26.316043 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 8 19:27:26.316938 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 8 19:27:26.317365 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 8 19:27:26.318510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:27:26.318675 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:27:26.319958 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 19:27:26.320097 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 19:27:26.322890 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:27:26.323044 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:27:26.324322 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:27:26.324447 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:27:26.328423 augenrules[1359]: No rules Oct 8 19:27:26.332889 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:27:26.338829 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1344) Oct 8 19:27:26.340057 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 19:27:26.340131 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 19:27:26.348400 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 8 19:27:26.354846 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1353) Oct 8 19:27:26.420777 systemd-networkd[1365]: lo: Link UP Oct 8 19:27:26.421128 systemd-networkd[1365]: lo: Gained carrier Oct 8 19:27:26.421901 systemd-networkd[1365]: Enumeration completed Oct 8 19:27:26.422195 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 19:27:26.424430 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:27:26.424509 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 19:27:26.425171 systemd-networkd[1365]: eth0: Link UP Oct 8 19:27:26.425254 systemd-networkd[1365]: eth0: Gained carrier Oct 8 19:27:26.425309 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:27:26.435143 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 8 19:27:26.436030 systemd-resolved[1307]: Positive Trust Anchors: Oct 8 19:27:26.436048 systemd-resolved[1307]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 19:27:26.436083 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Oct 8 19:27:26.436096 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 8 19:27:26.437887 systemd[1]: Reached target time-set.target - System Time Set. Oct 8 19:27:26.440888 systemd-networkd[1365]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 19:27:26.441483 systemd-timesyncd[1369]: Network configuration changed, trying to establish connection. Oct 8 19:27:26.443141 systemd-resolved[1307]: Defaulting to hostname 'linux'. Oct 8 19:27:26.443623 systemd-timesyncd[1369]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 8 19:27:26.443732 systemd-timesyncd[1369]: Initial clock synchronization to Tue 2024-10-08 19:27:26.680175 UTC. Oct 8 19:27:26.450190 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 8 19:27:26.457966 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 8 19:27:26.459372 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 19:27:26.460671 systemd[1]: Reached target network.target - Network. Oct 8 19:27:26.463958 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 19:27:26.473253 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 8 19:27:26.505070 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:27:26.514136 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 8 19:27:26.516905 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 8 19:27:26.532900 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 19:27:26.549883 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:27:26.561876 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 8 19:27:26.563000 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:27:26.563765 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 19:27:26.564599 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 8 19:27:26.565541 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 8 19:27:26.566589 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 8 19:27:26.567519 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 8 19:27:26.568447 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
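The DHCPv4 lease logged above hands eth0 the address 10.0.0.22/16 with gateway 10.0.0.1; Python's ipaddress module can spell out what that prefix implies (values copied from the log, purely illustrative):

    import ipaddress

    # Address and prefix exactly as reported by systemd-networkd above.
    iface = ipaddress.ip_interface("10.0.0.22/16")
    net = iface.network
    print(net)                                       # 10.0.0.0/16
    print(net.netmask)                               # 255.255.0.0
    print(net.broadcast_address)                     # 10.0.255.255
    print(net.num_addresses)                         # 65536
    print(ipaddress.ip_address("10.0.0.1") in net)   # True: the gateway is on-link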
Oct 8 19:27:26.569363 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 8 19:27:26.569406 systemd[1]: Reached target paths.target - Path Units. Oct 8 19:27:26.570060 systemd[1]: Reached target timers.target - Timer Units. Oct 8 19:27:26.571851 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 8 19:27:26.573957 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 8 19:27:26.581687 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 8 19:27:26.583601 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 8 19:27:26.584936 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 8 19:27:26.585773 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 19:27:26.586460 systemd[1]: Reached target basic.target - Basic System. Oct 8 19:27:26.587167 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 8 19:27:26.587197 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 8 19:27:26.588066 systemd[1]: Starting containerd.service - containerd container runtime... Oct 8 19:27:26.589708 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 8 19:27:26.591051 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 19:27:26.593936 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 8 19:27:26.595619 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 8 19:27:26.596531 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 8 19:27:26.599974 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 8 19:27:26.607712 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 8 19:27:26.611751 jq[1413]: false Oct 8 19:27:26.611867 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 8 19:27:26.616136 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 8 19:27:26.623605 extend-filesystems[1414]: Found loop3 Oct 8 19:27:26.629347 extend-filesystems[1414]: Found loop4 Oct 8 19:27:26.629347 extend-filesystems[1414]: Found loop5 Oct 8 19:27:26.629347 extend-filesystems[1414]: Found vda Oct 8 19:27:26.629347 extend-filesystems[1414]: Found vda1 Oct 8 19:27:26.629347 extend-filesystems[1414]: Found vda2 Oct 8 19:27:26.629347 extend-filesystems[1414]: Found vda3 Oct 8 19:27:26.629347 extend-filesystems[1414]: Found usr Oct 8 19:27:26.629347 extend-filesystems[1414]: Found vda4 Oct 8 19:27:26.629347 extend-filesystems[1414]: Found vda6 Oct 8 19:27:26.629347 extend-filesystems[1414]: Found vda7 Oct 8 19:27:26.629347 extend-filesystems[1414]: Found vda9 Oct 8 19:27:26.629347 extend-filesystems[1414]: Checking size of /dev/vda9 Oct 8 19:27:26.627208 dbus-daemon[1412]: [system] SELinux support is enabled Oct 8 19:27:26.627988 systemd[1]: Starting systemd-logind.service - User Login Management... 
Oct 8 19:27:26.650774 extend-filesystems[1414]: Resized partition /dev/vda9 Oct 8 19:27:26.656299 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 8 19:27:26.629712 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 8 19:27:26.656479 extend-filesystems[1434]: resize2fs 1.47.0 (5-Feb-2023) Oct 8 19:27:26.630152 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 19:27:26.657414 jq[1433]: true Oct 8 19:27:26.632085 systemd[1]: Starting update-engine.service - Update Engine... Oct 8 19:27:26.636273 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 8 19:27:26.637613 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 8 19:27:26.640857 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 8 19:27:26.649696 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 8 19:27:26.649874 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 19:27:26.650157 systemd[1]: motdgen.service: Deactivated successfully. Oct 8 19:27:26.650289 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 8 19:27:26.655184 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 19:27:26.655335 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 8 19:27:26.662804 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1348) Oct 8 19:27:26.675544 jq[1439]: true Oct 8 19:27:26.680048 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 8 19:27:26.688395 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 8 19:27:26.688435 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 8 19:27:26.689470 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 8 19:27:26.689487 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 8 19:27:26.694579 update_engine[1428]: I1008 19:27:26.691043 1428 main.cc:92] Flatcar Update Engine starting Oct 8 19:27:26.694125 (ntainerd)[1447]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 19:27:26.695952 extend-filesystems[1434]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 8 19:27:26.695952 extend-filesystems[1434]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 8 19:27:26.695952 extend-filesystems[1434]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 8 19:27:26.701349 extend-filesystems[1414]: Resized filesystem in /dev/vda9 Oct 8 19:27:26.698127 systemd[1]: Started update-engine.service - Update Engine. Oct 8 19:27:26.702114 update_engine[1428]: I1008 19:27:26.698102 1428 update_check_scheduler.cc:74] Next update check in 3m43s Oct 8 19:27:26.699276 systemd[1]: extend-filesystems.service: Deactivated successfully. 
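The resize messages above are expressed in 4 KiB ext4 blocks. Converting them makes the growth concrete; block counts are copied from the kernel and resize2fs lines, and this is a quick worked check, nothing more:

    BLOCK = 4096                                 # "(4k) blocks" per the resize2fs output
    OLD_BLOCKS, NEW_BLOCKS = 553_472, 1_864_699  # from/to values logged for /dev/vda9

    for label, blocks in (("before", OLD_BLOCKS), ("after", NEW_BLOCKS)):
        size = blocks * BLOCK
        print(f"{label}: {blocks} blocks = {size} bytes ~= {size / 2**30:.2f} GiB")
    # before: 553472 blocks = 2267021312 bytes ~= 2.11 GiB
    # after: 1864699 blocks = 7637807104 bytes ~= 7.11 GiB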
Oct 8 19:27:26.699459 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 19:27:26.702411 tar[1437]: linux-arm64/helm Oct 8 19:27:26.703982 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 19:27:26.708182 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (Power Button) Oct 8 19:27:26.708678 systemd-logind[1425]: New seat seat0. Oct 8 19:27:26.709354 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 19:27:26.726410 bash[1469]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:27:26.727838 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 19:27:26.730418 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 8 19:27:26.777226 locksmithd[1456]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 19:27:26.893813 sshd_keygen[1435]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 19:27:26.894214 containerd[1447]: time="2024-10-08T19:27:26.894137480Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Oct 8 19:27:26.915726 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 19:27:26.922879 containerd[1447]: time="2024-10-08T19:27:26.922662800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 19:27:26.922879 containerd[1447]: time="2024-10-08T19:27:26.922709840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:27:26.923979 containerd[1447]: time="2024-10-08T19:27:26.923945520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:27:26.923979 containerd[1447]: time="2024-10-08T19:27:26.923976680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:27:26.924197 containerd[1447]: time="2024-10-08T19:27:26.924179280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:27:26.924273 containerd[1447]: time="2024-10-08T19:27:26.924197320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 19:27:26.924273 containerd[1447]: time="2024-10-08T19:27:26.924266200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 19:27:26.924361 containerd[1447]: time="2024-10-08T19:27:26.924308000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:27:26.924361 containerd[1447]: time="2024-10-08T19:27:26.924325880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 19:27:26.924412 containerd[1447]: time="2024-10-08T19:27:26.924387360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Oct 8 19:27:26.924588 containerd[1447]: time="2024-10-08T19:27:26.924569160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 19:27:26.924616 containerd[1447]: time="2024-10-08T19:27:26.924593240Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 8 19:27:26.924616 containerd[1447]: time="2024-10-08T19:27:26.924603960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:27:26.924722 containerd[1447]: time="2024-10-08T19:27:26.924692520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:27:26.924722 containerd[1447]: time="2024-10-08T19:27:26.924712080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 19:27:26.924781 containerd[1447]: time="2024-10-08T19:27:26.924766040Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 8 19:27:26.924844 containerd[1447]: time="2024-10-08T19:27:26.924781560Z" level=info msg="metadata content store policy set" policy=shared Oct 8 19:27:26.925145 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 19:27:26.928489 containerd[1447]: time="2024-10-08T19:27:26.928455720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 19:27:26.928489 containerd[1447]: time="2024-10-08T19:27:26.928487960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 19:27:26.928558 containerd[1447]: time="2024-10-08T19:27:26.928505920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 19:27:26.928558 containerd[1447]: time="2024-10-08T19:27:26.928537480Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 19:27:26.928558 containerd[1447]: time="2024-10-08T19:27:26.928552920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 19:27:26.928632 containerd[1447]: time="2024-10-08T19:27:26.928564520Z" level=info msg="NRI interface is disabled by configuration." Oct 8 19:27:26.928632 containerd[1447]: time="2024-10-08T19:27:26.928576800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 19:27:26.928716 containerd[1447]: time="2024-10-08T19:27:26.928687960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 19:27:26.928753 containerd[1447]: time="2024-10-08T19:27:26.928714920Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 19:27:26.928753 containerd[1447]: time="2024-10-08T19:27:26.928732360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 19:27:26.928753 containerd[1447]: time="2024-10-08T19:27:26.928745480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Oct 8 19:27:26.928819 containerd[1447]: time="2024-10-08T19:27:26.928760600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 19:27:26.928819 containerd[1447]: time="2024-10-08T19:27:26.928777160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 19:27:26.929061 containerd[1447]: time="2024-10-08T19:27:26.929022040Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 19:27:26.929061 containerd[1447]: time="2024-10-08T19:27:26.929052880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 19:27:26.929115 containerd[1447]: time="2024-10-08T19:27:26.929067440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 19:27:26.929115 containerd[1447]: time="2024-10-08T19:27:26.929081760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 19:27:26.929115 containerd[1447]: time="2024-10-08T19:27:26.929094160Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 19:27:26.929115 containerd[1447]: time="2024-10-08T19:27:26.929105360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 19:27:26.929302 containerd[1447]: time="2024-10-08T19:27:26.929273520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 19:27:26.929615 containerd[1447]: time="2024-10-08T19:27:26.929591880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 19:27:26.929651 containerd[1447]: time="2024-10-08T19:27:26.929624280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 19:27:26.929651 containerd[1447]: time="2024-10-08T19:27:26.929643520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 19:27:26.929766 containerd[1447]: time="2024-10-08T19:27:26.929666520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 19:27:26.932950 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 19:27:26.933956 containerd[1447]: time="2024-10-08T19:27:26.930481240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 19:27:26.933956 containerd[1447]: time="2024-10-08T19:27:26.930512960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 19:27:26.933956 containerd[1447]: time="2024-10-08T19:27:26.930527680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 19:27:26.933956 containerd[1447]: time="2024-10-08T19:27:26.930540280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 19:27:26.933956 containerd[1447]: time="2024-10-08T19:27:26.930553160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 19:27:26.933956 containerd[1447]: time="2024-10-08T19:27:26.930567280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Oct 8 19:27:26.933956 containerd[1447]: time="2024-10-08T19:27:26.930579120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 19:27:26.933956 containerd[1447]: time="2024-10-08T19:27:26.930590800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 19:27:26.933956 containerd[1447]: time="2024-10-08T19:27:26.930604800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 19:27:26.933956 containerd[1447]: time="2024-10-08T19:27:26.930750800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 19:27:26.933956 containerd[1447]: time="2024-10-08T19:27:26.930767400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 19:27:26.933956 containerd[1447]: time="2024-10-08T19:27:26.930780840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 19:27:26.933956 containerd[1447]: time="2024-10-08T19:27:26.930815400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 19:27:26.933956 containerd[1447]: time="2024-10-08T19:27:26.930831840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 19:27:26.933956 containerd[1447]: time="2024-10-08T19:27:26.930846600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 19:27:26.933105 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 19:27:26.934409 containerd[1447]: time="2024-10-08T19:27:26.930865040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 19:27:26.934409 containerd[1447]: time="2024-10-08T19:27:26.930878480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 8 19:27:26.934445 containerd[1447]: time="2024-10-08T19:27:26.931180800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 19:27:26.934445 containerd[1447]: time="2024-10-08T19:27:26.931235600Z" level=info msg="Connect containerd service" Oct 8 19:27:26.934445 containerd[1447]: time="2024-10-08T19:27:26.931266240Z" level=info msg="using legacy CRI server" Oct 8 19:27:26.934445 containerd[1447]: time="2024-10-08T19:27:26.931272960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 19:27:26.934445 containerd[1447]: time="2024-10-08T19:27:26.931441640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 19:27:26.934445 containerd[1447]: time="2024-10-08T19:27:26.932063080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:27:26.934445 
containerd[1447]: time="2024-10-08T19:27:26.932112000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 19:27:26.934445 containerd[1447]: time="2024-10-08T19:27:26.932129040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 19:27:26.934445 containerd[1447]: time="2024-10-08T19:27:26.932139760Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 19:27:26.934445 containerd[1447]: time="2024-10-08T19:27:26.932151000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 19:27:26.934445 containerd[1447]: time="2024-10-08T19:27:26.932506000Z" level=info msg="Start subscribing containerd event" Oct 8 19:27:26.934445 containerd[1447]: time="2024-10-08T19:27:26.932629400Z" level=info msg="Start recovering state" Oct 8 19:27:26.934445 containerd[1447]: time="2024-10-08T19:27:26.932643480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 19:27:26.934445 containerd[1447]: time="2024-10-08T19:27:26.932842360Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 19:27:26.939081 containerd[1447]: time="2024-10-08T19:27:26.938078040Z" level=info msg="Start event monitor" Oct 8 19:27:26.939081 containerd[1447]: time="2024-10-08T19:27:26.938169320Z" level=info msg="Start snapshots syncer" Oct 8 19:27:26.939081 containerd[1447]: time="2024-10-08T19:27:26.938194440Z" level=info msg="Start cni network conf syncer for default" Oct 8 19:27:26.939081 containerd[1447]: time="2024-10-08T19:27:26.938204280Z" level=info msg="Start streaming server" Oct 8 19:27:26.939081 containerd[1447]: time="2024-10-08T19:27:26.938351680Z" level=info msg="containerd successfully booted in 0.046483s" Oct 8 19:27:26.942074 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 19:27:26.943355 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 19:27:26.952414 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 19:27:26.962169 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 19:27:26.964353 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 8 19:27:26.965371 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 19:27:27.067848 tar[1437]: linux-arm64/LICENSE Oct 8 19:27:27.067848 tar[1437]: linux-arm64/README.md Oct 8 19:27:27.083911 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 19:27:27.797586 systemd-networkd[1365]: eth0: Gained IPv6LL Oct 8 19:27:27.803559 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 19:27:27.805064 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 19:27:27.818110 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 8 19:27:27.820347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:27:27.822213 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 19:27:27.838541 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 8 19:27:27.838814 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
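The daemon above reports serving on /run/containerd/containerd.sock (plus its ttrpc socket) and booting in about 46 ms. As a sanity check, a minimal Go sketch against that same socket could look like the following; the "k8s.io" namespace is the one the CRI plugin uses for Kubernetes resources, and nothing in this snippet was actually run on this host.

    // version_check.go: minimal sketch (not from the log) that dials the same
    // containerd socket the daemon reports serving on and prints its version.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Socket path taken from the "msg=serving..." lines above.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatalf("connect: %v", err)
    	}
    	defer client.Close()

    	// The CRI plugin keeps Kubernetes images and containers under the k8s.io namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	ver, err := client.Version(ctx)
    	if err != nil {
    		log.Fatalf("version: %v", err)
    	}
    	fmt.Printf("containerd %s (revision %s)\n", ver.Version, ver.Revision)
    }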
Oct 8 19:27:27.840312 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 19:27:27.849927 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 19:27:28.305440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:27:28.306703 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 19:27:28.309418 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:27:28.310983 systemd[1]: Startup finished in 544ms (kernel) + 5.263s (initrd) + 3.381s (userspace) = 9.190s. Oct 8 19:27:28.805701 kubelet[1526]: E1008 19:27:28.805549 1526 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:27:28.808580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:27:28.808739 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:27:32.520524 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 19:27:32.521841 systemd[1]: Started sshd@0-10.0.0.22:22-10.0.0.1:48508.service - OpenSSH per-connection server daemon (10.0.0.1:48508). Oct 8 19:27:32.610155 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 48508 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:27:32.611759 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:27:32.620340 systemd-logind[1425]: New session 1 of user core. Oct 8 19:27:32.621348 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 19:27:32.633024 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 19:27:32.642027 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 19:27:32.644179 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 19:27:32.650525 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:27:32.733298 systemd[1544]: Queued start job for default target default.target. Oct 8 19:27:32.745717 systemd[1544]: Created slice app.slice - User Application Slice. Oct 8 19:27:32.745750 systemd[1544]: Reached target paths.target - Paths. Oct 8 19:27:32.745762 systemd[1544]: Reached target timers.target - Timers. Oct 8 19:27:32.746939 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 19:27:32.756835 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 19:27:32.756894 systemd[1544]: Reached target sockets.target - Sockets. Oct 8 19:27:32.756905 systemd[1544]: Reached target basic.target - Basic System. Oct 8 19:27:32.756945 systemd[1544]: Reached target default.target - Main User Target. Oct 8 19:27:32.756970 systemd[1544]: Startup finished in 101ms. Oct 8 19:27:32.757069 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 19:27:32.758328 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 19:27:32.829255 systemd[1]: Started sshd@1-10.0.0.22:22-10.0.0.1:48522.service - OpenSSH per-connection server daemon (10.0.0.1:48522). 
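The sshd entries above show a public-key login for the core user from 10.0.0.1, wrapped in its own session scope and per-user manager. For reference, a key-based session of the same shape can be driven from Go with x/crypto/ssh; the key path and the command are assumptions, not taken from the log.

    // ssh_session.go: sketch of a key-based login like the sessions above
    // (user "core", publickey auth). Key path and command are assumptions.
    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa") // hypothetical key location
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		log.Fatal(err)
    	}

    	cfg := &ssh.ClientConfig{
    		User:            "core",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local lab VM, not for production
    	}

    	client, err := ssh.Dial("tcp", "10.0.0.22:22", cfg) // host address as logged by the sshd@... units
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()

    	out, err := session.CombinedOutput("hostname")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s", out)
    }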
Oct 8 19:27:32.862082 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 48522 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:27:32.863230 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:27:32.867209 systemd-logind[1425]: New session 2 of user core. Oct 8 19:27:32.874958 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 19:27:32.926654 sshd[1555]: pam_unix(sshd:session): session closed for user core Oct 8 19:27:32.940283 systemd[1]: sshd@1-10.0.0.22:22-10.0.0.1:48522.service: Deactivated successfully. Oct 8 19:27:32.942241 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 19:27:32.943902 systemd-logind[1425]: Session 2 logged out. Waiting for processes to exit. Oct 8 19:27:32.955088 systemd[1]: Started sshd@2-10.0.0.22:22-10.0.0.1:48534.service - OpenSSH per-connection server daemon (10.0.0.1:48534). Oct 8 19:27:32.955828 systemd-logind[1425]: Removed session 2. Oct 8 19:27:32.984575 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 48534 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:27:32.985637 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:27:32.989011 systemd-logind[1425]: New session 3 of user core. Oct 8 19:27:32.995988 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 19:27:33.042932 sshd[1562]: pam_unix(sshd:session): session closed for user core Oct 8 19:27:33.051130 systemd[1]: sshd@2-10.0.0.22:22-10.0.0.1:48534.service: Deactivated successfully. Oct 8 19:27:33.052419 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 19:27:33.055005 systemd-logind[1425]: Session 3 logged out. Waiting for processes to exit. Oct 8 19:27:33.063119 systemd[1]: Started sshd@3-10.0.0.22:22-10.0.0.1:48538.service - OpenSSH per-connection server daemon (10.0.0.1:48538). Oct 8 19:27:33.063866 systemd-logind[1425]: Removed session 3. Oct 8 19:27:33.092873 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 48538 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:27:33.093941 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:27:33.097378 systemd-logind[1425]: New session 4 of user core. Oct 8 19:27:33.111998 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 19:27:33.162987 sshd[1569]: pam_unix(sshd:session): session closed for user core Oct 8 19:27:33.171947 systemd[1]: sshd@3-10.0.0.22:22-10.0.0.1:48538.service: Deactivated successfully. Oct 8 19:27:33.173313 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 19:27:33.175968 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit. Oct 8 19:27:33.187064 systemd[1]: Started sshd@4-10.0.0.22:22-10.0.0.1:48554.service - OpenSSH per-connection server daemon (10.0.0.1:48554). Oct 8 19:27:33.188040 systemd-logind[1425]: Removed session 4. Oct 8 19:27:33.216978 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 48554 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:27:33.218417 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:27:33.221955 systemd-logind[1425]: New session 5 of user core. Oct 8 19:27:33.233939 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 8 19:27:33.292001 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 19:27:33.292215 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:27:33.306574 sudo[1580]: pam_unix(sudo:session): session closed for user root Oct 8 19:27:33.308143 sshd[1577]: pam_unix(sshd:session): session closed for user core Oct 8 19:27:33.319110 systemd[1]: sshd@4-10.0.0.22:22-10.0.0.1:48554.service: Deactivated successfully. Oct 8 19:27:33.320450 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 19:27:33.321785 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit. Oct 8 19:27:33.322949 systemd[1]: Started sshd@5-10.0.0.22:22-10.0.0.1:48560.service - OpenSSH per-connection server daemon (10.0.0.1:48560). Oct 8 19:27:33.323649 systemd-logind[1425]: Removed session 5. Oct 8 19:27:33.356668 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 48560 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:27:33.358059 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:27:33.361695 systemd-logind[1425]: New session 6 of user core. Oct 8 19:27:33.369926 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 19:27:33.420223 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 19:27:33.420466 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:27:33.423171 sudo[1589]: pam_unix(sudo:session): session closed for user root Oct 8 19:27:33.427302 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 19:27:33.427535 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:27:33.442102 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 19:27:33.443193 auditctl[1592]: No rules Oct 8 19:27:33.444009 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 19:27:33.444867 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 19:27:33.446419 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:27:33.468143 augenrules[1610]: No rules Oct 8 19:27:33.468717 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:27:33.469981 sudo[1588]: pam_unix(sudo:session): session closed for user root Oct 8 19:27:33.472424 sshd[1585]: pam_unix(sshd:session): session closed for user core Oct 8 19:27:33.481988 systemd[1]: sshd@5-10.0.0.22:22-10.0.0.1:48560.service: Deactivated successfully. Oct 8 19:27:33.483409 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 19:27:33.484628 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit. Oct 8 19:27:33.498168 systemd[1]: Started sshd@6-10.0.0.22:22-10.0.0.1:48574.service - OpenSSH per-connection server daemon (10.0.0.1:48574). Oct 8 19:27:33.498924 systemd-logind[1425]: Removed session 6. Oct 8 19:27:33.527937 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 48574 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:27:33.529026 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:27:33.532629 systemd-logind[1425]: New session 7 of user core. Oct 8 19:27:33.541927 systemd[1]: Started session-7.scope - Session 7 of User core. 
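The sudo session above removes the SELinux/default audit rule files and restarts audit-rules.service, after which auditctl and augenrules both report that no rules are loaded. The same two steps, sketched with os/exec (needs root, illustrative only):

    // audit_rules_reload.go: sketch of the sequence the session above performs
    // via sudo - reload the audit-rules unit, then list the active rules.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func run(name string, args ...string) string {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
    	}
    	return string(out)
    }

    func main() {
    	run("systemctl", "restart", "audit-rules") // same unit the sudo command restarts
    	fmt.Print(run("auditctl", "-l"))           // prints "No rules" when nothing is loaded
    }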
Oct 8 19:27:33.592122 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 19:27:33.592344 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:27:33.693046 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 19:27:33.693197 (dockerd)[1631]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 19:27:33.927523 dockerd[1631]: time="2024-10-08T19:27:33.927463665Z" level=info msg="Starting up" Oct 8 19:27:34.014492 dockerd[1631]: time="2024-10-08T19:27:34.014238404Z" level=info msg="Loading containers: start." Oct 8 19:27:34.091830 kernel: Initializing XFRM netlink socket Oct 8 19:27:34.155080 systemd-networkd[1365]: docker0: Link UP Oct 8 19:27:34.177025 dockerd[1631]: time="2024-10-08T19:27:34.176996275Z" level=info msg="Loading containers: done." Oct 8 19:27:34.232933 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck738645091-merged.mount: Deactivated successfully. Oct 8 19:27:34.233968 dockerd[1631]: time="2024-10-08T19:27:34.233253436Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 19:27:34.233968 dockerd[1631]: time="2024-10-08T19:27:34.233433139Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Oct 8 19:27:34.233968 dockerd[1631]: time="2024-10-08T19:27:34.233536283Z" level=info msg="Daemon has completed initialization" Oct 8 19:27:34.257773 dockerd[1631]: time="2024-10-08T19:27:34.257710822Z" level=info msg="API listen on /run/docker.sock" Oct 8 19:27:34.257963 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 19:27:34.877134 containerd[1447]: time="2024-10-08T19:27:34.877087395Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 8 19:27:35.611722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2732761750.mount: Deactivated successfully. 
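dockerd above reports its API on /run/docker.sock with the overlay2 storage driver. A minimal sketch against that socket with the Docker Go SDK; apart from the socket path from the "API listen on" line, everything here is illustrative.

    // docker_version.go: sketch that talks to the /run/docker.sock endpoint the
    // daemon above reports listening on and prints the negotiated versions.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/docker/docker/client"
    )

    func main() {
    	cli, err := client.NewClientWithOpts(
    		client.WithHost("unix:///run/docker.sock"), // socket from the "API listen on" line
    		client.WithAPIVersionNegotiation(),
    	)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer cli.Close()

    	ver, err := cli.ServerVersion(context.Background())
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("docker %s (API %s)\n", ver.Version, ver.APIVersion)
    }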
Oct 8 19:27:37.166802 containerd[1447]: time="2024-10-08T19:27:37.166747650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:37.167278 containerd[1447]: time="2024-10-08T19:27:37.167244204Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=32286060" Oct 8 19:27:37.168809 containerd[1447]: time="2024-10-08T19:27:37.168772324Z" level=info msg="ImageCreate event name:\"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:37.173592 containerd[1447]: time="2024-10-08T19:27:37.173553441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:37.175533 containerd[1447]: time="2024-10-08T19:27:37.175502692Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"32282858\" in 2.29837224s" Oct 8 19:27:37.175582 containerd[1447]: time="2024-10-08T19:27:37.175541754Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\"" Oct 8 19:27:37.194343 containerd[1447]: time="2024-10-08T19:27:37.194307937Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 8 19:27:39.059297 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 19:27:39.068955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:27:39.155789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
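The PullImage/ImageCreate/Pulled sequence above is the CRI plugin fetching registry.k8s.io/kube-apiserver:v1.29.9 (about 2.3 s, including the temporary mount that systemd cleans up). The equivalent pull done directly with the containerd client, as a hedged sketch:

    // pull_image.go: sketch of the same pull the CRI plugin logs above, done
    // directly with the containerd client in the k8s.io namespace.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	start := time.Now()
    	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.29.9", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Mirrors the "Pulled image ... repo digest ... in <duration>" line above.
    	fmt.Printf("pulled %s (%s) in %s\n", img.Name(), img.Target().Digest, time.Since(start))
    }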
Oct 8 19:27:39.160444 (kubelet)[1844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:27:39.210938 containerd[1447]: time="2024-10-08T19:27:39.210875182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:39.211752 containerd[1447]: time="2024-10-08T19:27:39.211698940Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=29374206" Oct 8 19:27:39.212443 containerd[1447]: time="2024-10-08T19:27:39.212391884Z" level=info msg="ImageCreate event name:\"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:39.215226 containerd[1447]: time="2024-10-08T19:27:39.215185831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:39.216705 containerd[1447]: time="2024-10-08T19:27:39.216620730Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"30862018\" in 2.022273265s" Oct 8 19:27:39.216705 containerd[1447]: time="2024-10-08T19:27:39.216671148Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\"" Oct 8 19:27:39.221436 kubelet[1844]: E1008 19:27:39.221344 1844 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:27:39.224858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:27:39.224997 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
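The kubelet exit above is the same first-boot condition as before: /var/lib/kubelet/config.yaml has not been written yet, so run.go fails and systemd keeps scheduling restarts. A tiny, purely illustrative sketch of that pre-flight condition:

    // kubelet_config_check.go: illustrative check of the condition behind the
    // run.go error above - the kubelet exits while its config file is absent.
    package main

    import (
    	"errors"
    	"fmt"
    	"io/fs"
    	"os"
    )

    func main() {
    	const path = "/var/lib/kubelet/config.yaml" // path quoted in the kubelet error

    	if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
    		fmt.Printf("%s not present yet; kubelet.service will keep failing until it is written\n", path)
    		return
    	}
    	fmt.Printf("%s exists\n", path)
    }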
Oct 8 19:27:39.235541 containerd[1447]: time="2024-10-08T19:27:39.235490486Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 8 19:27:40.403323 containerd[1447]: time="2024-10-08T19:27:40.403276764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:40.404758 containerd[1447]: time="2024-10-08T19:27:40.404707504Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=15751219" Oct 8 19:27:40.406085 containerd[1447]: time="2024-10-08T19:27:40.405890600Z" level=info msg="ImageCreate event name:\"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:40.408276 containerd[1447]: time="2024-10-08T19:27:40.408248146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:40.409392 containerd[1447]: time="2024-10-08T19:27:40.409359670Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"17239049\" in 1.17383008s" Oct 8 19:27:40.409449 containerd[1447]: time="2024-10-08T19:27:40.409396059Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\"" Oct 8 19:27:40.427691 containerd[1447]: time="2024-10-08T19:27:40.427666168Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 8 19:27:41.789509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2484810593.mount: Deactivated successfully. 
Oct 8 19:27:42.120983 containerd[1447]: time="2024-10-08T19:27:42.120858115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:42.121831 containerd[1447]: time="2024-10-08T19:27:42.121679942Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=25254040" Oct 8 19:27:42.122481 containerd[1447]: time="2024-10-08T19:27:42.122447553Z" level=info msg="ImageCreate event name:\"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:42.125918 containerd[1447]: time="2024-10-08T19:27:42.125717111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:42.126919 containerd[1447]: time="2024-10-08T19:27:42.126179710Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"25253057\" in 1.698483275s" Oct 8 19:27:42.126919 containerd[1447]: time="2024-10-08T19:27:42.126206858Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\"" Oct 8 19:27:42.148467 containerd[1447]: time="2024-10-08T19:27:42.147850741Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 19:27:42.745485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1598937206.mount: Deactivated successfully. 
Oct 8 19:27:43.381227 containerd[1447]: time="2024-10-08T19:27:43.381154999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:43.381597 containerd[1447]: time="2024-10-08T19:27:43.381560169Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Oct 8 19:27:43.382493 containerd[1447]: time="2024-10-08T19:27:43.382451872Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:43.385654 containerd[1447]: time="2024-10-08T19:27:43.385583892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:43.388816 containerd[1447]: time="2024-10-08T19:27:43.387255108Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.239278836s" Oct 8 19:27:43.388816 containerd[1447]: time="2024-10-08T19:27:43.387307210Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Oct 8 19:27:43.408857 containerd[1447]: time="2024-10-08T19:27:43.408814860Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 8 19:27:43.896689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount728927777.mount: Deactivated successfully. 
Oct 8 19:27:43.902248 containerd[1447]: time="2024-10-08T19:27:43.902194712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:43.903959 containerd[1447]: time="2024-10-08T19:27:43.903912932Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Oct 8 19:27:43.904944 containerd[1447]: time="2024-10-08T19:27:43.904906750Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:43.907050 containerd[1447]: time="2024-10-08T19:27:43.907004089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:43.908752 containerd[1447]: time="2024-10-08T19:27:43.908307465Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 499.452666ms" Oct 8 19:27:43.908752 containerd[1447]: time="2024-10-08T19:27:43.908351017Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Oct 8 19:27:43.928851 containerd[1447]: time="2024-10-08T19:27:43.928818889Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 8 19:27:44.568787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262439677.mount: Deactivated successfully. Oct 8 19:27:47.052099 containerd[1447]: time="2024-10-08T19:27:47.052044311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:47.052629 containerd[1447]: time="2024-10-08T19:27:47.052593272Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Oct 8 19:27:47.053407 containerd[1447]: time="2024-10-08T19:27:47.053374709Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:47.056362 containerd[1447]: time="2024-10-08T19:27:47.056331670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:27:47.057718 containerd[1447]: time="2024-10-08T19:27:47.057597135Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.128614809s" Oct 8 19:27:47.057718 containerd[1447]: time="2024-10-08T19:27:47.057631886Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Oct 8 19:27:49.475382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
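With etcd pulled, the full control-plane image set (apiserver, controller-manager, scheduler, proxy, CoreDNS, pause, etcd) is now cached locally; note that the CRI config earlier pins SandboxImage to registry.k8s.io/pause:3.8 while pause:3.9 was also fetched here. A small sketch that lists what those pulls left in the k8s.io namespace:

    // list_images.go: sketch that lists the images the pulls above left in the
    // k8s.io namespace, matching the ImageCreate events in the log.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	images, err := client.ListImages(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, img := range images {
    		fmt.Printf("%s\t%s\n", img.Name(), img.Target().Digest)
    	}
    }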
Oct 8 19:27:49.484975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:27:49.570508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:27:49.573862 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:27:49.611264 kubelet[2071]: E1008 19:27:49.611209 2071 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:27:49.614218 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:27:49.614356 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:27:52.857525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:27:52.869998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:27:52.886482 systemd[1]: Reloading requested from client PID 2086 ('systemctl') (unit session-7.scope)... Oct 8 19:27:52.886497 systemd[1]: Reloading... Oct 8 19:27:52.953832 zram_generator::config[2129]: No configuration found. Oct 8 19:27:53.061515 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:27:53.115524 systemd[1]: Reloading finished in 228 ms. Oct 8 19:27:53.162058 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:27:53.166010 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 19:27:53.166227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:27:53.167655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:27:53.264573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:27:53.268971 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:27:53.309505 kubelet[2170]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:27:53.309505 kubelet[2170]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:27:53.309505 kubelet[2170]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
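All three deprecation warnings above point at the same fix: move the flags into the kubelet config file, which is also the file whose absence caused the earlier run.go failures. A minimal, illustrative version of that file and a sketch that writes it; on this host the real file is generated by the bootstrap tooling rather than written by hand, and the field values below are assumptions chosen to match what the log shows elsewhere.

    // write_kubelet_config.go: illustrative only - writes a minimal
    // KubeletConfiguration of the kind the deprecation warnings point at.
    package main

    import (
    	"log"
    	"os"
    )

    const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                     # matches SystemdCgroup:true in the CRI runc options above
    staticPodPath: /etc/kubernetes/manifests  # matches the static pod path the kubelet logs later
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    `

    func main() {
    	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }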
Oct 8 19:27:53.309859 kubelet[2170]: I1008 19:27:53.309539 2170 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:27:54.197505 kubelet[2170]: I1008 19:27:54.197007 2170 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 19:27:54.199012 kubelet[2170]: I1008 19:27:54.198938 2170 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:27:54.199413 kubelet[2170]: I1008 19:27:54.199386 2170 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 19:27:54.258029 kubelet[2170]: I1008 19:27:54.258000 2170 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:27:54.261794 kubelet[2170]: E1008 19:27:54.261702 2170 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:54.265375 kubelet[2170]: I1008 19:27:54.265347 2170 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 19:27:54.265595 kubelet[2170]: I1008 19:27:54.265579 2170 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:27:54.265774 kubelet[2170]: I1008 19:27:54.265754 2170 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 19:27:54.265774 kubelet[2170]: I1008 19:27:54.265776 2170 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:27:54.265882 kubelet[2170]: I1008 19:27:54.265785 2170 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:27:54.266931 kubelet[2170]: I1008 19:27:54.266903 2170 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:27:54.269023 kubelet[2170]: I1008 19:27:54.268987 2170 kubelet.go:396] "Attempting to sync node with API server" Oct 8 19:27:54.269023 kubelet[2170]: I1008 
19:27:54.269021 2170 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:27:54.269076 kubelet[2170]: I1008 19:27:54.269041 2170 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:27:54.269076 kubelet[2170]: I1008 19:27:54.269054 2170 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:27:54.269530 kubelet[2170]: W1008 19:27:54.269429 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:54.269530 kubelet[2170]: E1008 19:27:54.269495 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:54.269631 kubelet[2170]: W1008 19:27:54.269567 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:54.269631 kubelet[2170]: E1008 19:27:54.269591 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:54.270492 kubelet[2170]: I1008 19:27:54.270473 2170 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 8 19:27:54.270946 kubelet[2170]: I1008 19:27:54.270928 2170 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:27:54.271562 kubelet[2170]: W1008 19:27:54.271529 2170 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
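The reflector failures above are the kubelet trying to list its own Node object and the cluster Services from https://10.0.0.22:6443 before the kube-apiserver static pod has come up, so every call ends in "connection refused". The same list issued with client-go, as a sketch; the kubeconfig path is an assumption.

    // list_nodes.go: sketch of the list call the failing reflectors above make,
    // using client-go against the endpoint from the log (10.0.0.22:6443).
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed kubeconfig path
    	if err != nil {
    		log.Fatal(err)
    	}

    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Same selector the kubelet's reflector uses: only its own Node object.
    	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{
    		FieldSelector: "metadata.name=localhost",
    	})
    	if err != nil {
    		// While the kube-apiserver static pod is still starting this fails with
    		// "connection refused", exactly as in the log.
    		log.Fatal(err)
    	}
    	fmt.Printf("found %d node(s)\n", len(nodes.Items))
    }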
Oct 8 19:27:54.272567 kubelet[2170]: I1008 19:27:54.272435 2170 server.go:1256] "Started kubelet" Oct 8 19:27:54.276050 kubelet[2170]: I1008 19:27:54.275450 2170 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:27:54.276050 kubelet[2170]: I1008 19:27:54.275745 2170 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:27:54.276050 kubelet[2170]: I1008 19:27:54.275866 2170 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:27:54.277279 kubelet[2170]: I1008 19:27:54.277247 2170 server.go:461] "Adding debug handlers to kubelet server" Oct 8 19:27:54.278236 kubelet[2170]: I1008 19:27:54.278211 2170 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:27:54.279319 kubelet[2170]: E1008 19:27:54.279288 2170 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.22:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fc90e5f1396cc9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-08 19:27:54.272410825 +0000 UTC m=+1.000127163,LastTimestamp:2024-10-08 19:27:54.272410825 +0000 UTC m=+1.000127163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 8 19:27:54.279803 kubelet[2170]: E1008 19:27:54.279758 2170 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:27:54.279850 kubelet[2170]: I1008 19:27:54.279811 2170 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:27:54.280054 kubelet[2170]: I1008 19:27:54.279925 2170 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 8 19:27:54.280054 kubelet[2170]: I1008 19:27:54.279987 2170 reconciler_new.go:29] "Reconciler: start to sync state" Oct 8 19:27:54.280366 kubelet[2170]: W1008 19:27:54.280324 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:54.280366 kubelet[2170]: E1008 19:27:54.280370 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:54.281692 kubelet[2170]: E1008 19:27:54.281175 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="200ms" Oct 8 19:27:54.281692 kubelet[2170]: I1008 19:27:54.281538 2170 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:27:54.281692 kubelet[2170]: I1008 19:27:54.281640 2170 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:27:54.282452 kubelet[2170]: E1008 19:27:54.282416 2170 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:27:54.283371 kubelet[2170]: I1008 19:27:54.283353 2170 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:27:54.291940 kubelet[2170]: I1008 19:27:54.291904 2170 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:27:54.293306 kubelet[2170]: I1008 19:27:54.292777 2170 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 19:27:54.293306 kubelet[2170]: I1008 19:27:54.292807 2170 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:27:54.293306 kubelet[2170]: I1008 19:27:54.292823 2170 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 19:27:54.293306 kubelet[2170]: E1008 19:27:54.292878 2170 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:27:54.296298 kubelet[2170]: W1008 19:27:54.296164 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:54.296401 kubelet[2170]: E1008 19:27:54.296382 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:54.296872 kubelet[2170]: I1008 19:27:54.296745 2170 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:27:54.296872 kubelet[2170]: I1008 19:27:54.296758 2170 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:27:54.296872 kubelet[2170]: I1008 19:27:54.296774 2170 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:27:54.358648 kubelet[2170]: I1008 19:27:54.358605 2170 policy_none.go:49] "None policy: Start" Oct 8 19:27:54.359716 kubelet[2170]: I1008 19:27:54.359699 2170 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:27:54.359771 kubelet[2170]: I1008 19:27:54.359744 2170 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:27:54.364643 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 19:27:54.381037 kubelet[2170]: I1008 19:27:54.381004 2170 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:27:54.381374 kubelet[2170]: E1008 19:27:54.381361 2170 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Oct 8 19:27:54.383601 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 19:27:54.386109 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 8 19:27:54.393682 kubelet[2170]: E1008 19:27:54.393663 2170 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:27:54.396589 kubelet[2170]: I1008 19:27:54.396565 2170 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:27:54.396941 kubelet[2170]: I1008 19:27:54.396828 2170 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:27:54.397779 kubelet[2170]: E1008 19:27:54.397764 2170 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 8 19:27:54.482218 kubelet[2170]: E1008 19:27:54.482129 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="400ms" Oct 8 19:27:54.582384 kubelet[2170]: I1008 19:27:54.582343 2170 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:27:54.582682 kubelet[2170]: E1008 19:27:54.582658 2170 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Oct 8 19:27:54.593897 kubelet[2170]: I1008 19:27:54.593856 2170 topology_manager.go:215] "Topology Admit Handler" podUID="e6e37df9a95fde4992a1c14b5ebeef90" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 19:27:54.594701 kubelet[2170]: I1008 19:27:54.594678 2170 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 19:27:54.595596 kubelet[2170]: I1008 19:27:54.595524 2170 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 19:27:54.600828 systemd[1]: Created slice kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice - libcontainer container kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice. Oct 8 19:27:54.621290 systemd[1]: Created slice kubepods-burstable-pode6e37df9a95fde4992a1c14b5ebeef90.slice - libcontainer container kubepods-burstable-pode6e37df9a95fde4992a1c14b5ebeef90.slice. Oct 8 19:27:54.635764 systemd[1]: Created slice kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice - libcontainer container kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice. 
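The three Topology Admit Handler entries above are the static control-plane pods (kube-apiserver, kube-controller-manager, kube-scheduler) being picked up from /etc/kubernetes/manifests, with systemd creating one kubepods-burstable slice per pod. For illustration only, this is the general shape of a static pod manifest the kubelet admits the same way; the real control-plane manifests on this host are generated during bootstrap and carry many more flags and host-path mounts.

    // static_pod_sketch.go: illustrative only - drops a minimal static pod
    // manifest into the directory the admit handlers above watch.
    package main

    import (
    	"log"
    	"os"
    	"path/filepath"
    )

    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: hello-static        # hypothetical example pod, not one of the control-plane pods
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9   # one of the images pulled earlier in this log
    `

    func main() {
    	path := filepath.Join("/etc/kubernetes/manifests", "hello-static.yaml")
    	if err := os.WriteFile(path, []byte(manifest), 0o600); err != nil {
    		log.Fatal(err)
    	}
    	// The kubelet's static pod watcher picks the file up and creates a mirror
    	// pod for it once the API server becomes reachable.
    }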
Oct 8 19:27:54.682569 kubelet[2170]: I1008 19:27:54.682324 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6e37df9a95fde4992a1c14b5ebeef90-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6e37df9a95fde4992a1c14b5ebeef90\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:27:54.682569 kubelet[2170]: I1008 19:27:54.682368 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6e37df9a95fde4992a1c14b5ebeef90-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e6e37df9a95fde4992a1c14b5ebeef90\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:27:54.682569 kubelet[2170]: I1008 19:27:54.682387 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:27:54.682569 kubelet[2170]: I1008 19:27:54.682405 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:27:54.682569 kubelet[2170]: I1008 19:27:54.682427 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:27:54.682730 kubelet[2170]: I1008 19:27:54.682443 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6e37df9a95fde4992a1c14b5ebeef90-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6e37df9a95fde4992a1c14b5ebeef90\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:27:54.682730 kubelet[2170]: I1008 19:27:54.682460 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:27:54.682730 kubelet[2170]: I1008 19:27:54.682478 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:27:54.682730 kubelet[2170]: I1008 19:27:54.682501 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " 
pod="kube-system/kube-scheduler-localhost" Oct 8 19:27:54.883537 kubelet[2170]: E1008 19:27:54.883427 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="800ms" Oct 8 19:27:54.920738 kubelet[2170]: E1008 19:27:54.920693 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:27:54.921346 containerd[1447]: time="2024-10-08T19:27:54.921308217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}" Oct 8 19:27:54.934636 kubelet[2170]: E1008 19:27:54.934598 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:27:54.935015 containerd[1447]: time="2024-10-08T19:27:54.934986284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e6e37df9a95fde4992a1c14b5ebeef90,Namespace:kube-system,Attempt:0,}" Oct 8 19:27:54.937387 kubelet[2170]: E1008 19:27:54.937294 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:27:54.937633 containerd[1447]: time="2024-10-08T19:27:54.937593939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}" Oct 8 19:27:54.984154 kubelet[2170]: I1008 19:27:54.984128 2170 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:27:54.984445 kubelet[2170]: E1008 19:27:54.984420 2170 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Oct 8 19:27:55.227216 kubelet[2170]: W1008 19:27:55.226833 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:55.227216 kubelet[2170]: E1008 19:27:55.226879 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:55.331607 kubelet[2170]: W1008 19:27:55.331543 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:55.331607 kubelet[2170]: E1008 19:27:55.331584 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:55.404112 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount184236508.mount: Deactivated successfully. Oct 8 19:27:55.414615 containerd[1447]: time="2024-10-08T19:27:55.414012579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:27:55.417900 containerd[1447]: time="2024-10-08T19:27:55.417853479Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Oct 8 19:27:55.419749 containerd[1447]: time="2024-10-08T19:27:55.419716269Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:27:55.420668 containerd[1447]: time="2024-10-08T19:27:55.420603212Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:27:55.421945 containerd[1447]: time="2024-10-08T19:27:55.421906248Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:27:55.422554 containerd[1447]: time="2024-10-08T19:27:55.422527164Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:27:55.424248 containerd[1447]: time="2024-10-08T19:27:55.424211188Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:27:55.426855 containerd[1447]: time="2024-10-08T19:27:55.426169084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:27:55.428033 containerd[1447]: time="2024-10-08T19:27:55.427567708Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 506.161413ms" Oct 8 19:27:55.428397 containerd[1447]: time="2024-10-08T19:27:55.428356662Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.288993ms" Oct 8 19:27:55.432476 containerd[1447]: time="2024-10-08T19:27:55.432389857Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 494.721299ms" Oct 8 19:27:55.432616 kubelet[2170]: W1008 19:27:55.432563 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get 
"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:55.432909 kubelet[2170]: E1008 19:27:55.432627 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:55.582968 containerd[1447]: time="2024-10-08T19:27:55.582763318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:27:55.582968 containerd[1447]: time="2024-10-08T19:27:55.582837370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:27:55.583142 containerd[1447]: time="2024-10-08T19:27:55.582897292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:27:55.583142 containerd[1447]: time="2024-10-08T19:27:55.582953972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:27:55.583142 containerd[1447]: time="2024-10-08T19:27:55.582977869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:27:55.583142 containerd[1447]: time="2024-10-08T19:27:55.583002486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:27:55.583142 containerd[1447]: time="2024-10-08T19:27:55.582859305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:27:55.583142 containerd[1447]: time="2024-10-08T19:27:55.582875637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:27:55.584970 containerd[1447]: time="2024-10-08T19:27:55.584855709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:27:55.584970 containerd[1447]: time="2024-10-08T19:27:55.584904583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:27:55.584970 containerd[1447]: time="2024-10-08T19:27:55.584922916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:27:55.585074 containerd[1447]: time="2024-10-08T19:27:55.584938327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:27:55.602959 systemd[1]: Started cri-containerd-0ec408655b129f4a8af8803fb5f2566978522be7b98f0bdc9edf76ac94563b9e.scope - libcontainer container 0ec408655b129f4a8af8803fb5f2566978522be7b98f0bdc9edf76ac94563b9e. Oct 8 19:27:55.604273 systemd[1]: Started cri-containerd-2810ed1d46069d1af250629dfe5914609e5ce701f98f5f6bbcb2f42e364020c6.scope - libcontainer container 2810ed1d46069d1af250629dfe5914609e5ce701f98f5f6bbcb2f42e364020c6. 
Oct 8 19:27:55.605343 systemd[1]: Started cri-containerd-efab509da59bd3ebdf2a8536bf8af4d300e175494157c3128cf1fc3cf0f58001.scope - libcontainer container efab509da59bd3ebdf2a8536bf8af4d300e175494157c3128cf1fc3cf0f58001. Oct 8 19:27:55.632904 kubelet[2170]: W1008 19:27:55.632825 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:55.632904 kubelet[2170]: E1008 19:27:55.632879 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Oct 8 19:27:55.635602 containerd[1447]: time="2024-10-08T19:27:55.635567875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ec408655b129f4a8af8803fb5f2566978522be7b98f0bdc9edf76ac94563b9e\"" Oct 8 19:27:55.636326 containerd[1447]: time="2024-10-08T19:27:55.636299790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2810ed1d46069d1af250629dfe5914609e5ce701f98f5f6bbcb2f42e364020c6\"" Oct 8 19:27:55.637977 kubelet[2170]: E1008 19:27:55.637921 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:27:55.637977 kubelet[2170]: E1008 19:27:55.637946 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:27:55.640444 containerd[1447]: time="2024-10-08T19:27:55.640248085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e6e37df9a95fde4992a1c14b5ebeef90,Namespace:kube-system,Attempt:0,} returns sandbox id \"efab509da59bd3ebdf2a8536bf8af4d300e175494157c3128cf1fc3cf0f58001\"" Oct 8 19:27:55.640836 kubelet[2170]: E1008 19:27:55.640707 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:27:55.642625 containerd[1447]: time="2024-10-08T19:27:55.642505912Z" level=info msg="CreateContainer within sandbox \"0ec408655b129f4a8af8803fb5f2566978522be7b98f0bdc9edf76ac94563b9e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 19:27:55.644231 containerd[1447]: time="2024-10-08T19:27:55.643015431Z" level=info msg="CreateContainer within sandbox \"efab509da59bd3ebdf2a8536bf8af4d300e175494157c3128cf1fc3cf0f58001\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 19:27:55.644467 containerd[1447]: time="2024-10-08T19:27:55.643239988Z" level=info msg="CreateContainer within sandbox \"2810ed1d46069d1af250629dfe5914609e5ce701f98f5f6bbcb2f42e364020c6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 19:27:55.663013 containerd[1447]: time="2024-10-08T19:27:55.662923224Z" level=info msg="CreateContainer within sandbox \"2810ed1d46069d1af250629dfe5914609e5ce701f98f5f6bbcb2f42e364020c6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} 
returns container id \"43e1442b29c41723c0cb936378d04debabc6ae5142336eed7aeca33c1416b13e\"" Oct 8 19:27:55.663593 containerd[1447]: time="2024-10-08T19:27:55.663567877Z" level=info msg="StartContainer for \"43e1442b29c41723c0cb936378d04debabc6ae5142336eed7aeca33c1416b13e\"" Oct 8 19:27:55.663890 containerd[1447]: time="2024-10-08T19:27:55.663860683Z" level=info msg="CreateContainer within sandbox \"0ec408655b129f4a8af8803fb5f2566978522be7b98f0bdc9edf76ac94563b9e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"de3dccd47beb80832dbeeae959f8aedd972f91e86effae867930b4e080fcba5f\"" Oct 8 19:27:55.664206 containerd[1447]: time="2024-10-08T19:27:55.664159293Z" level=info msg="StartContainer for \"de3dccd47beb80832dbeeae959f8aedd972f91e86effae867930b4e080fcba5f\"" Oct 8 19:27:55.666913 containerd[1447]: time="2024-10-08T19:27:55.666837976Z" level=info msg="CreateContainer within sandbox \"efab509da59bd3ebdf2a8536bf8af4d300e175494157c3128cf1fc3cf0f58001\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c7cb81348bcd4602175ad7f9ff4085d206f25eb696345ab901295d3dadd37c8d\"" Oct 8 19:27:55.667831 containerd[1447]: time="2024-10-08T19:27:55.667237977Z" level=info msg="StartContainer for \"c7cb81348bcd4602175ad7f9ff4085d206f25eb696345ab901295d3dadd37c8d\"" Oct 8 19:27:55.684488 kubelet[2170]: E1008 19:27:55.684450 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="1.6s" Oct 8 19:27:55.692007 systemd[1]: Started cri-containerd-43e1442b29c41723c0cb936378d04debabc6ae5142336eed7aeca33c1416b13e.scope - libcontainer container 43e1442b29c41723c0cb936378d04debabc6ae5142336eed7aeca33c1416b13e. Oct 8 19:27:55.692952 systemd[1]: Started cri-containerd-de3dccd47beb80832dbeeae959f8aedd972f91e86effae867930b4e080fcba5f.scope - libcontainer container de3dccd47beb80832dbeeae959f8aedd972f91e86effae867930b4e080fcba5f. Oct 8 19:27:55.695463 systemd[1]: Started cri-containerd-c7cb81348bcd4602175ad7f9ff4085d206f25eb696345ab901295d3dadd37c8d.scope - libcontainer container c7cb81348bcd4602175ad7f9ff4085d206f25eb696345ab901295d3dadd37c8d. 
Oct 8 19:27:55.754923 containerd[1447]: time="2024-10-08T19:27:55.754845038Z" level=info msg="StartContainer for \"de3dccd47beb80832dbeeae959f8aedd972f91e86effae867930b4e080fcba5f\" returns successfully" Oct 8 19:27:55.755023 containerd[1447]: time="2024-10-08T19:27:55.754865212Z" level=info msg="StartContainer for \"c7cb81348bcd4602175ad7f9ff4085d206f25eb696345ab901295d3dadd37c8d\" returns successfully" Oct 8 19:27:55.755023 containerd[1447]: time="2024-10-08T19:27:55.754871657Z" level=info msg="StartContainer for \"43e1442b29c41723c0cb936378d04debabc6ae5142336eed7aeca33c1416b13e\" returns successfully" Oct 8 19:27:55.788618 kubelet[2170]: I1008 19:27:55.785748 2170 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:27:55.788618 kubelet[2170]: E1008 19:27:55.786037 2170 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Oct 8 19:27:56.303449 kubelet[2170]: E1008 19:27:56.303272 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:27:56.305528 kubelet[2170]: E1008 19:27:56.305462 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:27:56.307018 kubelet[2170]: E1008 19:27:56.306977 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:27:57.309154 kubelet[2170]: E1008 19:27:57.309082 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:27:57.387022 kubelet[2170]: I1008 19:27:57.386989 2170 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:27:58.409611 kubelet[2170]: E1008 19:27:58.409567 2170 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 8 19:27:58.500716 kubelet[2170]: I1008 19:27:58.500671 2170 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 19:27:59.055052 kubelet[2170]: E1008 19:27:59.055002 2170 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 8 19:27:59.055472 kubelet[2170]: E1008 19:27:59.055447 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:27:59.202986 kubelet[2170]: E1008 19:27:59.202940 2170 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 8 19:27:59.203371 kubelet[2170]: E1008 19:27:59.203345 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:27:59.272985 kubelet[2170]: I1008 19:27:59.272950 2170 apiserver.go:52] "Watching 
apiserver" Oct 8 19:27:59.280295 kubelet[2170]: I1008 19:27:59.280257 2170 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 19:28:00.872528 systemd[1]: Reloading requested from client PID 2453 ('systemctl') (unit session-7.scope)... Oct 8 19:28:00.872550 systemd[1]: Reloading... Oct 8 19:28:00.940870 zram_generator::config[2493]: No configuration found. Oct 8 19:28:01.021010 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:28:01.086202 systemd[1]: Reloading finished in 213 ms. Oct 8 19:28:01.115348 kubelet[2170]: I1008 19:28:01.115314 2170 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:28:01.115462 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:28:01.130671 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 19:28:01.130940 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:28:01.131062 systemd[1]: kubelet.service: Consumed 1.428s CPU time, 115.5M memory peak, 0B memory swap peak. Oct 8 19:28:01.140185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:28:01.228360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:28:01.232433 (kubelet)[2532]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:28:01.273535 kubelet[2532]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:28:01.273535 kubelet[2532]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:28:01.273535 kubelet[2532]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:28:01.274063 kubelet[2532]: I1008 19:28:01.273578 2532 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:28:01.277779 kubelet[2532]: I1008 19:28:01.277609 2532 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 19:28:01.277779 kubelet[2532]: I1008 19:28:01.277632 2532 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:28:01.277925 kubelet[2532]: I1008 19:28:01.277832 2532 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 19:28:01.279412 kubelet[2532]: I1008 19:28:01.279391 2532 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 8 19:28:01.281557 kubelet[2532]: I1008 19:28:01.281347 2532 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:28:01.292547 kubelet[2532]: I1008 19:28:01.292502 2532 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 19:28:01.292706 kubelet[2532]: I1008 19:28:01.292699 2532 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:28:01.292899 kubelet[2532]: I1008 19:28:01.292878 2532 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 19:28:01.292994 kubelet[2532]: I1008 19:28:01.292919 2532 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:28:01.292994 kubelet[2532]: I1008 19:28:01.292931 2532 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:28:01.292994 kubelet[2532]: I1008 19:28:01.292969 2532 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:28:01.293339 kubelet[2532]: I1008 19:28:01.293322 2532 kubelet.go:396] "Attempting to sync node with API server" Oct 8 19:28:01.293382 kubelet[2532]: I1008 19:28:01.293346 2532 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:28:01.294140 kubelet[2532]: I1008 19:28:01.293650 2532 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:28:01.294140 kubelet[2532]: I1008 19:28:01.293684 2532 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:28:01.295946 kubelet[2532]: I1008 19:28:01.294342 2532 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 8 19:28:01.295946 kubelet[2532]: I1008 19:28:01.294533 2532 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:28:01.295946 kubelet[2532]: I1008 19:28:01.294898 2532 server.go:1256] "Started kubelet" Oct 8 19:28:01.295946 kubelet[2532]: I1008 19:28:01.295783 2532 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:28:01.300802 kubelet[2532]: I1008 19:28:01.296778 2532 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:28:01.300802 kubelet[2532]: I1008 19:28:01.297578 2532 server.go:461] "Adding debug handlers to kubelet server" Oct 8 19:28:01.300802 kubelet[2532]: I1008 19:28:01.298480 2532 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 Oct 8 19:28:01.300802 kubelet[2532]: I1008 19:28:01.298624 2532 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:28:01.300802 kubelet[2532]: I1008 19:28:01.299130 2532 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:28:01.300802 kubelet[2532]: I1008 19:28:01.299211 2532 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 8 19:28:01.300802 kubelet[2532]: I1008 19:28:01.299321 2532 reconciler_new.go:29] "Reconciler: start to sync state" Oct 8 19:28:01.300802 kubelet[2532]: E1008 19:28:01.299849 2532 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 8 19:28:01.301545 kubelet[2532]: E1008 19:28:01.301517 2532 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:28:01.302852 kubelet[2532]: I1008 19:28:01.301646 2532 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:28:01.302852 kubelet[2532]: I1008 19:28:01.301730 2532 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:28:01.303051 sudo[2548]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 8 19:28:01.303285 sudo[2548]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Oct 8 19:28:01.313716 kubelet[2532]: I1008 19:28:01.313679 2532 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:28:01.326237 kubelet[2532]: I1008 19:28:01.324192 2532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:28:01.327614 kubelet[2532]: I1008 19:28:01.326812 2532 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 19:28:01.327614 kubelet[2532]: I1008 19:28:01.326836 2532 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:28:01.327614 kubelet[2532]: I1008 19:28:01.326852 2532 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 19:28:01.327614 kubelet[2532]: E1008 19:28:01.326926 2532 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:28:01.360962 kubelet[2532]: I1008 19:28:01.360862 2532 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:28:01.360962 kubelet[2532]: I1008 19:28:01.360882 2532 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:28:01.360962 kubelet[2532]: I1008 19:28:01.360900 2532 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:28:01.361133 kubelet[2532]: I1008 19:28:01.361076 2532 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 19:28:01.361133 kubelet[2532]: I1008 19:28:01.361096 2532 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 19:28:01.361133 kubelet[2532]: I1008 19:28:01.361102 2532 policy_none.go:49] "None policy: Start" Oct 8 19:28:01.362237 kubelet[2532]: I1008 19:28:01.361887 2532 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:28:01.362237 kubelet[2532]: I1008 19:28:01.361998 2532 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:28:01.362237 kubelet[2532]: I1008 19:28:01.362151 2532 state_mem.go:75] "Updated machine memory state" Oct 8 19:28:01.366530 kubelet[2532]: I1008 19:28:01.366510 2532 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:28:01.366752 kubelet[2532]: I1008 19:28:01.366737 2532 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:28:01.404048 kubelet[2532]: I1008 19:28:01.403947 2532 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 8 19:28:01.410298 kubelet[2532]: I1008 19:28:01.410265 2532 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 8 19:28:01.410428 kubelet[2532]: I1008 19:28:01.410338 2532 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 8 19:28:01.427768 kubelet[2532]: I1008 19:28:01.427716 2532 topology_manager.go:215] "Topology Admit Handler" podUID="e6e37df9a95fde4992a1c14b5ebeef90" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 8 19:28:01.427883 kubelet[2532]: I1008 19:28:01.427848 2532 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 8 19:28:01.428060 kubelet[2532]: I1008 19:28:01.427911 2532 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 8 19:28:01.600474 kubelet[2532]: I1008 19:28:01.600381 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6e37df9a95fde4992a1c14b5ebeef90-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6e37df9a95fde4992a1c14b5ebeef90\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:28:01.601097 kubelet[2532]: I1008 19:28:01.600672 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 8 19:28:01.601097 kubelet[2532]: I1008 19:28:01.600712 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6e37df9a95fde4992a1c14b5ebeef90-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e6e37df9a95fde4992a1c14b5ebeef90\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:28:01.601097 kubelet[2532]: I1008 19:28:01.600735 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6e37df9a95fde4992a1c14b5ebeef90-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e6e37df9a95fde4992a1c14b5ebeef90\") " pod="kube-system/kube-apiserver-localhost" Oct 8 19:28:01.601097 kubelet[2532]: I1008 19:28:01.600771 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:28:01.601097 kubelet[2532]: I1008 19:28:01.600822 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:28:01.601271 kubelet[2532]: I1008 19:28:01.600845 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:28:01.601271 kubelet[2532]: I1008 19:28:01.600871 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:28:01.601271 kubelet[2532]: I1008 19:28:01.600894 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 8 19:28:01.734573 kubelet[2532]: E1008 19:28:01.734059 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:01.734573 kubelet[2532]: E1008 19:28:01.734266 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:01.734759 kubelet[2532]: E1008 19:28:01.734727 2532 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:01.746842 sudo[2548]: pam_unix(sudo:session): session closed for user root Oct 8 19:28:02.295065 kubelet[2532]: I1008 19:28:02.295016 2532 apiserver.go:52] "Watching apiserver" Oct 8 19:28:02.299768 kubelet[2532]: I1008 19:28:02.299731 2532 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 19:28:02.344961 kubelet[2532]: E1008 19:28:02.344817 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:02.344961 kubelet[2532]: E1008 19:28:02.344845 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:02.391220 kubelet[2532]: E1008 19:28:02.391067 2532 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 8 19:28:02.392127 kubelet[2532]: E1008 19:28:02.391532 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:02.425583 kubelet[2532]: I1008 19:28:02.425540 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.4254936599999999 podStartE2EDuration="1.42549366s" podCreationTimestamp="2024-10-08 19:28:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:28:02.391343471 +0000 UTC m=+1.155576232" watchObservedRunningTime="2024-10-08 19:28:02.42549366 +0000 UTC m=+1.189726461" Oct 8 19:28:02.434450 kubelet[2532]: I1008 19:28:02.434385 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.434345232 podStartE2EDuration="1.434345232s" podCreationTimestamp="2024-10-08 19:28:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:28:02.425629114 +0000 UTC m=+1.189861875" watchObservedRunningTime="2024-10-08 19:28:02.434345232 +0000 UTC m=+1.198578033" Oct 8 19:28:02.442185 kubelet[2532]: I1008 19:28:02.442147 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.44210725 podStartE2EDuration="1.44210725s" podCreationTimestamp="2024-10-08 19:28:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:28:02.43448937 +0000 UTC m=+1.198722131" watchObservedRunningTime="2024-10-08 19:28:02.44210725 +0000 UTC m=+1.206340051" Oct 8 19:28:03.345978 kubelet[2532]: E1008 19:28:03.345952 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:03.760499 sudo[1621]: pam_unix(sudo:session): session closed for user root Oct 8 19:28:03.761993 sshd[1618]: pam_unix(sshd:session): session closed for user core Oct 8 
19:28:03.765607 systemd[1]: sshd@6-10.0.0.22:22-10.0.0.1:48574.service: Deactivated successfully. Oct 8 19:28:03.767698 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 19:28:03.768029 systemd[1]: session-7.scope: Consumed 8.362s CPU time, 133.5M memory peak, 0B memory swap peak. Oct 8 19:28:03.768684 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit. Oct 8 19:28:03.770029 systemd-logind[1425]: Removed session 7. Oct 8 19:28:03.770102 kubelet[2532]: E1008 19:28:03.770037 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:08.900820 kubelet[2532]: E1008 19:28:08.900718 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:09.354585 kubelet[2532]: E1008 19:28:09.354479 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:10.828960 kubelet[2532]: E1008 19:28:10.828878 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:11.359494 kubelet[2532]: E1008 19:28:11.359451 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:12.123651 update_engine[1428]: I1008 19:28:12.123483 1428 update_attempter.cc:509] Updating boot flags... Oct 8 19:28:12.168971 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2615) Oct 8 19:28:12.205927 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2616) Oct 8 19:28:12.231960 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2619) Oct 8 19:28:13.776780 kubelet[2532]: E1008 19:28:13.776701 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:16.970023 kubelet[2532]: I1008 19:28:16.969625 2532 topology_manager.go:215] "Topology Admit Handler" podUID="094b44ed-5840-45e3-89b2-56cb08d76998" podNamespace="kube-system" podName="kube-proxy-twffk" Oct 8 19:28:16.978439 systemd[1]: Created slice kubepods-besteffort-pod094b44ed_5840_45e3_89b2_56cb08d76998.slice - libcontainer container kubepods-besteffort-pod094b44ed_5840_45e3_89b2_56cb08d76998.slice. Oct 8 19:28:16.983377 kubelet[2532]: I1008 19:28:16.983327 2532 topology_manager.go:215] "Topology Admit Handler" podUID="1c80d9b7-0d35-4215-a6dc-bf743dd049e8" podNamespace="kube-system" podName="cilium-x6wp2" Oct 8 19:28:16.991537 systemd[1]: Created slice kubepods-burstable-pod1c80d9b7_0d35_4215_a6dc_bf743dd049e8.slice - libcontainer container kubepods-burstable-pod1c80d9b7_0d35_4215_a6dc_bf743dd049e8.slice. 
Oct 8 19:28:17.014755 kubelet[2532]: I1008 19:28:17.014704 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-lib-modules\") pod \"cilium-x6wp2\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " pod="kube-system/cilium-x6wp2" Oct 8 19:28:17.014755 kubelet[2532]: I1008 19:28:17.014755 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmgbn\" (UniqueName: \"kubernetes.io/projected/094b44ed-5840-45e3-89b2-56cb08d76998-kube-api-access-nmgbn\") pod \"kube-proxy-twffk\" (UID: \"094b44ed-5840-45e3-89b2-56cb08d76998\") " pod="kube-system/kube-proxy-twffk" Oct 8 19:28:17.014947 kubelet[2532]: I1008 19:28:17.014780 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-bpf-maps\") pod \"cilium-x6wp2\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " pod="kube-system/cilium-x6wp2" Oct 8 19:28:17.014947 kubelet[2532]: I1008 19:28:17.014829 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-hubble-tls\") pod \"cilium-x6wp2\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " pod="kube-system/cilium-x6wp2" Oct 8 19:28:17.014947 kubelet[2532]: I1008 19:28:17.014864 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/094b44ed-5840-45e3-89b2-56cb08d76998-kube-proxy\") pod \"kube-proxy-twffk\" (UID: \"094b44ed-5840-45e3-89b2-56cb08d76998\") " pod="kube-system/kube-proxy-twffk" Oct 8 19:28:17.014947 kubelet[2532]: I1008 19:28:17.014894 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-etc-cni-netd\") pod \"cilium-x6wp2\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " pod="kube-system/cilium-x6wp2" Oct 8 19:28:17.014947 kubelet[2532]: I1008 19:28:17.014927 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-host-proc-sys-kernel\") pod \"cilium-x6wp2\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " pod="kube-system/cilium-x6wp2" Oct 8 19:28:17.015054 kubelet[2532]: I1008 19:28:17.014961 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/094b44ed-5840-45e3-89b2-56cb08d76998-lib-modules\") pod \"kube-proxy-twffk\" (UID: \"094b44ed-5840-45e3-89b2-56cb08d76998\") " pod="kube-system/kube-proxy-twffk" Oct 8 19:28:17.015054 kubelet[2532]: I1008 19:28:17.014979 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-host-proc-sys-net\") pod \"cilium-x6wp2\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " pod="kube-system/cilium-x6wp2" Oct 8 19:28:17.015054 kubelet[2532]: I1008 19:28:17.015006 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-cilium-run\") pod \"cilium-x6wp2\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " pod="kube-system/cilium-x6wp2" Oct 8 19:28:17.015117 kubelet[2532]: I1008 19:28:17.015050 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqj8c\" (UniqueName: \"kubernetes.io/projected/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-kube-api-access-jqj8c\") pod \"cilium-x6wp2\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " pod="kube-system/cilium-x6wp2" Oct 8 19:28:17.015117 kubelet[2532]: I1008 19:28:17.015086 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-clustermesh-secrets\") pod \"cilium-x6wp2\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " pod="kube-system/cilium-x6wp2" Oct 8 19:28:17.015117 kubelet[2532]: I1008 19:28:17.015106 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-cilium-config-path\") pod \"cilium-x6wp2\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " pod="kube-system/cilium-x6wp2" Oct 8 19:28:17.015181 kubelet[2532]: I1008 19:28:17.015129 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-hostproc\") pod \"cilium-x6wp2\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " pod="kube-system/cilium-x6wp2" Oct 8 19:28:17.015181 kubelet[2532]: I1008 19:28:17.015155 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-cilium-cgroup\") pod \"cilium-x6wp2\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " pod="kube-system/cilium-x6wp2" Oct 8 19:28:17.015223 kubelet[2532]: I1008 19:28:17.015185 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/094b44ed-5840-45e3-89b2-56cb08d76998-xtables-lock\") pod \"kube-proxy-twffk\" (UID: \"094b44ed-5840-45e3-89b2-56cb08d76998\") " pod="kube-system/kube-proxy-twffk" Oct 8 19:28:17.015223 kubelet[2532]: I1008 19:28:17.015207 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-cni-path\") pod \"cilium-x6wp2\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " pod="kube-system/cilium-x6wp2" Oct 8 19:28:17.015267 kubelet[2532]: I1008 19:28:17.015230 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-xtables-lock\") pod \"cilium-x6wp2\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " pod="kube-system/cilium-x6wp2" Oct 8 19:28:17.058163 kubelet[2532]: I1008 19:28:17.057945 2532 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 19:28:17.058421 containerd[1447]: time="2024-10-08T19:28:17.058311602Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 8 19:28:17.058867 kubelet[2532]: I1008 19:28:17.058479 2532 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 19:28:17.072875 kubelet[2532]: I1008 19:28:17.072813 2532 topology_manager.go:215] "Topology Admit Handler" podUID="5f4d201c-7278-4224-972c-05bdd7248a44" podNamespace="kube-system" podName="cilium-operator-5cc964979-vwjqq" Oct 8 19:28:17.081092 systemd[1]: Created slice kubepods-besteffort-pod5f4d201c_7278_4224_972c_05bdd7248a44.slice - libcontainer container kubepods-besteffort-pod5f4d201c_7278_4224_972c_05bdd7248a44.slice. Oct 8 19:28:17.115824 kubelet[2532]: I1008 19:28:17.115625 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4jxr\" (UniqueName: \"kubernetes.io/projected/5f4d201c-7278-4224-972c-05bdd7248a44-kube-api-access-x4jxr\") pod \"cilium-operator-5cc964979-vwjqq\" (UID: \"5f4d201c-7278-4224-972c-05bdd7248a44\") " pod="kube-system/cilium-operator-5cc964979-vwjqq" Oct 8 19:28:17.115824 kubelet[2532]: I1008 19:28:17.115769 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f4d201c-7278-4224-972c-05bdd7248a44-cilium-config-path\") pod \"cilium-operator-5cc964979-vwjqq\" (UID: \"5f4d201c-7278-4224-972c-05bdd7248a44\") " pod="kube-system/cilium-operator-5cc964979-vwjqq" Oct 8 19:28:17.286452 kubelet[2532]: E1008 19:28:17.286408 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:17.287322 containerd[1447]: time="2024-10-08T19:28:17.286924556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-twffk,Uid:094b44ed-5840-45e3-89b2-56cb08d76998,Namespace:kube-system,Attempt:0,}" Oct 8 19:28:17.301532 kubelet[2532]: E1008 19:28:17.301215 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:17.301924 containerd[1447]: time="2024-10-08T19:28:17.301747927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x6wp2,Uid:1c80d9b7-0d35-4215-a6dc-bf743dd049e8,Namespace:kube-system,Attempt:0,}" Oct 8 19:28:17.305903 containerd[1447]: time="2024-10-08T19:28:17.305785830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:28:17.306086 containerd[1447]: time="2024-10-08T19:28:17.305945340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:28:17.306086 containerd[1447]: time="2024-10-08T19:28:17.305986467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:28:17.306086 containerd[1447]: time="2024-10-08T19:28:17.306012712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:28:17.319832 containerd[1447]: time="2024-10-08T19:28:17.318580907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:28:17.319832 containerd[1447]: time="2024-10-08T19:28:17.318707611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:28:17.319832 containerd[1447]: time="2024-10-08T19:28:17.318732855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:28:17.319832 containerd[1447]: time="2024-10-08T19:28:17.318747258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:28:17.324976 systemd[1]: Started cri-containerd-e19228b99463319c1313f0033006af7ddb90be836c5979d5e870f4f897bbfeb2.scope - libcontainer container e19228b99463319c1313f0033006af7ddb90be836c5979d5e870f4f897bbfeb2. Oct 8 19:28:17.334761 systemd[1]: Started cri-containerd-47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a.scope - libcontainer container 47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a. Oct 8 19:28:17.355272 containerd[1447]: time="2024-10-08T19:28:17.354680718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-twffk,Uid:094b44ed-5840-45e3-89b2-56cb08d76998,Namespace:kube-system,Attempt:0,} returns sandbox id \"e19228b99463319c1313f0033006af7ddb90be836c5979d5e870f4f897bbfeb2\"" Oct 8 19:28:17.355585 kubelet[2532]: E1008 19:28:17.355565 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:17.358137 containerd[1447]: time="2024-10-08T19:28:17.358093866Z" level=info msg="CreateContainer within sandbox \"e19228b99463319c1313f0033006af7ddb90be836c5979d5e870f4f897bbfeb2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 19:28:17.366025 containerd[1447]: time="2024-10-08T19:28:17.365987240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x6wp2,Uid:1c80d9b7-0d35-4215-a6dc-bf743dd049e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a\"" Oct 8 19:28:17.367685 kubelet[2532]: E1008 19:28:17.367477 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:17.374463 containerd[1447]: time="2024-10-08T19:28:17.374418674Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 8 19:28:17.380826 containerd[1447]: time="2024-10-08T19:28:17.379959974Z" level=info msg="CreateContainer within sandbox \"e19228b99463319c1313f0033006af7ddb90be836c5979d5e870f4f897bbfeb2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"14a6b3251af5374810ae91639f410ac4360d4192aef9dc14627769c14e994553\"" Oct 8 19:28:17.381219 containerd[1447]: time="2024-10-08T19:28:17.381103945Z" level=info msg="StartContainer for \"14a6b3251af5374810ae91639f410ac4360d4192aef9dc14627769c14e994553\"" Oct 8 19:28:17.384547 kubelet[2532]: E1008 19:28:17.384519 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:17.385505 containerd[1447]: time="2024-10-08T19:28:17.385107403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-vwjqq,Uid:5f4d201c-7278-4224-972c-05bdd7248a44,Namespace:kube-system,Attempt:0,}" Oct 8 19:28:17.406961 containerd[1447]: 
time="2024-10-08T19:28:17.406324071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:28:17.406961 containerd[1447]: time="2024-10-08T19:28:17.406903898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:28:17.406961 containerd[1447]: time="2024-10-08T19:28:17.406921901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:28:17.406961 containerd[1447]: time="2024-10-08T19:28:17.406932343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:28:17.406984 systemd[1]: Started cri-containerd-14a6b3251af5374810ae91639f410ac4360d4192aef9dc14627769c14e994553.scope - libcontainer container 14a6b3251af5374810ae91639f410ac4360d4192aef9dc14627769c14e994553. Oct 8 19:28:17.425987 systemd[1]: Started cri-containerd-e2f858a9003f0d7bfb0395c0d0df1fb2a9c249e66414f7d60aeb6b160d36934c.scope - libcontainer container e2f858a9003f0d7bfb0395c0d0df1fb2a9c249e66414f7d60aeb6b160d36934c. Oct 8 19:28:17.445523 containerd[1447]: time="2024-10-08T19:28:17.445473483Z" level=info msg="StartContainer for \"14a6b3251af5374810ae91639f410ac4360d4192aef9dc14627769c14e994553\" returns successfully" Oct 8 19:28:17.464701 containerd[1447]: time="2024-10-08T19:28:17.464629092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-vwjqq,Uid:5f4d201c-7278-4224-972c-05bdd7248a44,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2f858a9003f0d7bfb0395c0d0df1fb2a9c249e66414f7d60aeb6b160d36934c\"" Oct 8 19:28:17.465424 kubelet[2532]: E1008 19:28:17.465399 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:18.375204 kubelet[2532]: E1008 19:28:18.375176 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:18.386575 kubelet[2532]: I1008 19:28:18.385696 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-twffk" podStartSLOduration=2.385650956 podStartE2EDuration="2.385650956s" podCreationTimestamp="2024-10-08 19:28:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:28:18.385462123 +0000 UTC m=+17.149694924" watchObservedRunningTime="2024-10-08 19:28:18.385650956 +0000 UTC m=+17.149883757" Oct 8 19:28:19.378579 kubelet[2532]: E1008 19:28:19.378540 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:24.505394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2868982342.mount: Deactivated successfully. 
Oct 8 19:28:25.706683 containerd[1447]: time="2024-10-08T19:28:25.706633423Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:28:25.708772 containerd[1447]: time="2024-10-08T19:28:25.708702495Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651486" Oct 8 19:28:25.711493 containerd[1447]: time="2024-10-08T19:28:25.709529083Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:28:25.711493 containerd[1447]: time="2024-10-08T19:28:25.711056483Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.336592282s" Oct 8 19:28:25.711493 containerd[1447]: time="2024-10-08T19:28:25.711085887Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 8 19:28:25.713528 containerd[1447]: time="2024-10-08T19:28:25.713495483Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 8 19:28:25.714834 containerd[1447]: time="2024-10-08T19:28:25.714803975Z" level=info msg="CreateContainer within sandbox \"47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 19:28:25.739877 containerd[1447]: time="2024-10-08T19:28:25.739841498Z" level=info msg="CreateContainer within sandbox \"47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338\"" Oct 8 19:28:25.740351 containerd[1447]: time="2024-10-08T19:28:25.740286996Z" level=info msg="StartContainer for \"71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338\"" Oct 8 19:28:25.775014 systemd[1]: Started cri-containerd-71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338.scope - libcontainer container 71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338. Oct 8 19:28:25.851691 systemd[1]: cri-containerd-71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338.scope: Deactivated successfully. 
Oct 8 19:28:25.854616 containerd[1447]: time="2024-10-08T19:28:25.854524657Z" level=info msg="StartContainer for \"71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338\" returns successfully" Oct 8 19:28:25.999768 containerd[1447]: time="2024-10-08T19:28:25.995033843Z" level=info msg="shim disconnected" id=71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338 namespace=k8s.io Oct 8 19:28:26.000030 containerd[1447]: time="2024-10-08T19:28:25.999762423Z" level=warning msg="cleaning up after shim disconnected" id=71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338 namespace=k8s.io Oct 8 19:28:26.000030 containerd[1447]: time="2024-10-08T19:28:25.999844114Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:28:26.397038 kubelet[2532]: E1008 19:28:26.396926 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:26.401832 containerd[1447]: time="2024-10-08T19:28:26.401083926Z" level=info msg="CreateContainer within sandbox \"47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 19:28:26.412202 containerd[1447]: time="2024-10-08T19:28:26.412152963Z" level=info msg="CreateContainer within sandbox \"47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042\"" Oct 8 19:28:26.413691 containerd[1447]: time="2024-10-08T19:28:26.412936182Z" level=info msg="StartContainer for \"bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042\"" Oct 8 19:28:26.439935 systemd[1]: Started cri-containerd-bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042.scope - libcontainer container bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042. Oct 8 19:28:26.461207 containerd[1447]: time="2024-10-08T19:28:26.461165790Z" level=info msg="StartContainer for \"bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042\" returns successfully" Oct 8 19:28:26.477571 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 19:28:26.478264 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:28:26.478327 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:28:26.484870 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:28:26.485058 systemd[1]: cri-containerd-bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042.scope: Deactivated successfully. Oct 8 19:28:26.509202 containerd[1447]: time="2024-10-08T19:28:26.509145686Z" level=info msg="shim disconnected" id=bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042 namespace=k8s.io Oct 8 19:28:26.509202 containerd[1447]: time="2024-10-08T19:28:26.509200533Z" level=warning msg="cleaning up after shim disconnected" id=bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042 namespace=k8s.io Oct 8 19:28:26.509202 containerd[1447]: time="2024-10-08T19:28:26.509209254Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:28:26.516775 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Oct 8 19:28:26.737916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338-rootfs.mount: Deactivated successfully. Oct 8 19:28:27.398281 kubelet[2532]: E1008 19:28:27.398258 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:27.401739 containerd[1447]: time="2024-10-08T19:28:27.401474918Z" level=info msg="CreateContainer within sandbox \"47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 19:28:27.457652 containerd[1447]: time="2024-10-08T19:28:27.457605545Z" level=info msg="CreateContainer within sandbox \"47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d\"" Oct 8 19:28:27.458336 containerd[1447]: time="2024-10-08T19:28:27.458309671Z" level=info msg="StartContainer for \"b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d\"" Oct 8 19:28:27.489959 systemd[1]: Started cri-containerd-b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d.scope - libcontainer container b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d. Oct 8 19:28:27.519343 containerd[1447]: time="2024-10-08T19:28:27.519258723Z" level=info msg="StartContainer for \"b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d\" returns successfully" Oct 8 19:28:27.531526 systemd[1]: cri-containerd-b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d.scope: Deactivated successfully. Oct 8 19:28:27.588605 containerd[1447]: time="2024-10-08T19:28:27.588348447Z" level=info msg="shim disconnected" id=b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d namespace=k8s.io Oct 8 19:28:27.588605 containerd[1447]: time="2024-10-08T19:28:27.588406494Z" level=warning msg="cleaning up after shim disconnected" id=b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d namespace=k8s.io Oct 8 19:28:27.588605 containerd[1447]: time="2024-10-08T19:28:27.588415455Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:28:27.608199 containerd[1447]: time="2024-10-08T19:28:27.608154255Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:28:27.609269 containerd[1447]: time="2024-10-08T19:28:27.609236027Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138334" Oct 8 19:28:27.610049 containerd[1447]: time="2024-10-08T19:28:27.609996719Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:28:27.611838 containerd[1447]: time="2024-10-08T19:28:27.611462418Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.897599487s" Oct 8 19:28:27.611838 containerd[1447]: time="2024-10-08T19:28:27.611499542Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 8 19:28:27.613365 containerd[1447]: time="2024-10-08T19:28:27.613328925Z" level=info msg="CreateContainer within sandbox \"e2f858a9003f0d7bfb0395c0d0df1fb2a9c249e66414f7d60aeb6b160d36934c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 8 19:28:27.629097 containerd[1447]: time="2024-10-08T19:28:27.629055197Z" level=info msg="CreateContainer within sandbox \"e2f858a9003f0d7bfb0395c0d0df1fb2a9c249e66414f7d60aeb6b160d36934c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d\"" Oct 8 19:28:27.630225 containerd[1447]: time="2024-10-08T19:28:27.629432203Z" level=info msg="StartContainer for \"8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d\"" Oct 8 19:28:27.651934 systemd[1]: Started cri-containerd-8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d.scope - libcontainer container 8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d. Oct 8 19:28:27.672880 containerd[1447]: time="2024-10-08T19:28:27.672838083Z" level=info msg="StartContainer for \"8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d\" returns successfully" Oct 8 19:28:27.738454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d-rootfs.mount: Deactivated successfully. Oct 8 19:28:28.403888 kubelet[2532]: E1008 19:28:28.403852 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:28.406223 kubelet[2532]: E1008 19:28:28.406200 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:28.408232 containerd[1447]: time="2024-10-08T19:28:28.408050065Z" level=info msg="CreateContainer within sandbox \"47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 19:28:28.427579 containerd[1447]: time="2024-10-08T19:28:28.427455462Z" level=info msg="CreateContainer within sandbox \"47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357\"" Oct 8 19:28:28.428856 containerd[1447]: time="2024-10-08T19:28:28.428065813Z" level=info msg="StartContainer for \"0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357\"" Oct 8 19:28:28.431917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2265155677.mount: Deactivated successfully. 
Oct 8 19:28:28.464839 kubelet[2532]: I1008 19:28:28.464483 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-vwjqq" podStartSLOduration=1.319157594 podStartE2EDuration="11.464428359s" podCreationTimestamp="2024-10-08 19:28:17 +0000 UTC" firstStartedPulling="2024-10-08 19:28:17.466721997 +0000 UTC m=+16.230954798" lastFinishedPulling="2024-10-08 19:28:27.611992762 +0000 UTC m=+26.376225563" observedRunningTime="2024-10-08 19:28:28.440926682 +0000 UTC m=+27.205159483" watchObservedRunningTime="2024-10-08 19:28:28.464428359 +0000 UTC m=+27.228661160" Oct 8 19:28:28.465981 systemd[1]: Started cri-containerd-0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357.scope - libcontainer container 0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357. Oct 8 19:28:28.488393 systemd[1]: cri-containerd-0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357.scope: Deactivated successfully. Oct 8 19:28:28.507431 containerd[1447]: time="2024-10-08T19:28:28.507305669Z" level=info msg="StartContainer for \"0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357\" returns successfully" Oct 8 19:28:28.518841 containerd[1447]: time="2024-10-08T19:28:28.518773494Z" level=info msg="shim disconnected" id=0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357 namespace=k8s.io Oct 8 19:28:28.518841 containerd[1447]: time="2024-10-08T19:28:28.518838102Z" level=warning msg="cleaning up after shim disconnected" id=0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357 namespace=k8s.io Oct 8 19:28:28.518841 containerd[1447]: time="2024-10-08T19:28:28.518847583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:28:28.738004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357-rootfs.mount: Deactivated successfully. Oct 8 19:28:29.410722 kubelet[2532]: E1008 19:28:29.410683 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:29.410722 kubelet[2532]: E1008 19:28:29.410710 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:29.422489 containerd[1447]: time="2024-10-08T19:28:29.422449519Z" level=info msg="CreateContainer within sandbox \"47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 8 19:28:29.451250 containerd[1447]: time="2024-10-08T19:28:29.451203496Z" level=info msg="CreateContainer within sandbox \"47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5\"" Oct 8 19:28:29.452017 containerd[1447]: time="2024-10-08T19:28:29.451807244Z" level=info msg="StartContainer for \"c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5\"" Oct 8 19:28:29.482977 systemd[1]: Started cri-containerd-c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5.scope - libcontainer container c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5. 
Oct 8 19:28:29.514501 containerd[1447]: time="2024-10-08T19:28:29.514459820Z" level=info msg="StartContainer for \"c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5\" returns successfully" Oct 8 19:28:29.657624 kubelet[2532]: I1008 19:28:29.657596 2532 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 19:28:29.689021 kubelet[2532]: I1008 19:28:29.688913 2532 topology_manager.go:215] "Topology Admit Handler" podUID="2e8786bd-ac1f-49ec-9c98-42c2493ef9ea" podNamespace="kube-system" podName="coredns-76f75df574-nzbxt" Oct 8 19:28:29.689198 kubelet[2532]: I1008 19:28:29.689181 2532 topology_manager.go:215] "Topology Admit Handler" podUID="35f4ef99-90fb-4c2e-a6a6-e8f6ed4ac1bf" podNamespace="kube-system" podName="coredns-76f75df574-qsl9k" Oct 8 19:28:29.699938 systemd[1]: Created slice kubepods-burstable-pod2e8786bd_ac1f_49ec_9c98_42c2493ef9ea.slice - libcontainer container kubepods-burstable-pod2e8786bd_ac1f_49ec_9c98_42c2493ef9ea.slice. Oct 8 19:28:29.702806 kubelet[2532]: I1008 19:28:29.702756 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e8786bd-ac1f-49ec-9c98-42c2493ef9ea-config-volume\") pod \"coredns-76f75df574-nzbxt\" (UID: \"2e8786bd-ac1f-49ec-9c98-42c2493ef9ea\") " pod="kube-system/coredns-76f75df574-nzbxt" Oct 8 19:28:29.702997 kubelet[2532]: I1008 19:28:29.702899 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdbck\" (UniqueName: \"kubernetes.io/projected/2e8786bd-ac1f-49ec-9c98-42c2493ef9ea-kube-api-access-xdbck\") pod \"coredns-76f75df574-nzbxt\" (UID: \"2e8786bd-ac1f-49ec-9c98-42c2493ef9ea\") " pod="kube-system/coredns-76f75df574-nzbxt" Oct 8 19:28:29.702997 kubelet[2532]: I1008 19:28:29.702927 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35f4ef99-90fb-4c2e-a6a6-e8f6ed4ac1bf-config-volume\") pod \"coredns-76f75df574-qsl9k\" (UID: \"35f4ef99-90fb-4c2e-a6a6-e8f6ed4ac1bf\") " pod="kube-system/coredns-76f75df574-qsl9k" Oct 8 19:28:29.703197 kubelet[2532]: I1008 19:28:29.703125 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7xfj\" (UniqueName: \"kubernetes.io/projected/35f4ef99-90fb-4c2e-a6a6-e8f6ed4ac1bf-kube-api-access-w7xfj\") pod \"coredns-76f75df574-qsl9k\" (UID: \"35f4ef99-90fb-4c2e-a6a6-e8f6ed4ac1bf\") " pod="kube-system/coredns-76f75df574-qsl9k" Oct 8 19:28:29.709786 systemd[1]: Created slice kubepods-burstable-pod35f4ef99_90fb_4c2e_a6a6_e8f6ed4ac1bf.slice - libcontainer container kubepods-burstable-pod35f4ef99_90fb_4c2e_a6a6_e8f6ed4ac1bf.slice. 
Oct 8 19:28:30.006478 kubelet[2532]: E1008 19:28:30.006350 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:30.008169 containerd[1447]: time="2024-10-08T19:28:30.008123147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nzbxt,Uid:2e8786bd-ac1f-49ec-9c98-42c2493ef9ea,Namespace:kube-system,Attempt:0,}" Oct 8 19:28:30.012918 kubelet[2532]: E1008 19:28:30.012884 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:30.013451 containerd[1447]: time="2024-10-08T19:28:30.013299834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qsl9k,Uid:35f4ef99-90fb-4c2e-a6a6-e8f6ed4ac1bf,Namespace:kube-system,Attempt:0,}" Oct 8 19:28:30.415809 kubelet[2532]: E1008 19:28:30.415624 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:30.430913 kubelet[2532]: I1008 19:28:30.430852 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-x6wp2" podStartSLOduration=6.086173474 podStartE2EDuration="14.43080658s" podCreationTimestamp="2024-10-08 19:28:16 +0000 UTC" firstStartedPulling="2024-10-08 19:28:17.368230174 +0000 UTC m=+16.132462935" lastFinishedPulling="2024-10-08 19:28:25.71286324 +0000 UTC m=+24.477096041" observedRunningTime="2024-10-08 19:28:30.429604568 +0000 UTC m=+29.193837409" watchObservedRunningTime="2024-10-08 19:28:30.43080658 +0000 UTC m=+29.195039381" Oct 8 19:28:31.418226 kubelet[2532]: E1008 19:28:31.418201 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:31.577777 systemd[1]: Started sshd@7-10.0.0.22:22-10.0.0.1:43174.service - OpenSSH per-connection server daemon (10.0.0.1:43174). Oct 8 19:28:31.616924 sshd[3378]: Accepted publickey for core from 10.0.0.1 port 43174 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:28:31.618379 sshd[3378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:28:31.622780 systemd-logind[1425]: New session 8 of user core. Oct 8 19:28:31.637939 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 19:28:31.755531 sshd[3378]: pam_unix(sshd:session): session closed for user core Oct 8 19:28:31.758071 systemd[1]: sshd@7-10.0.0.22:22-10.0.0.1:43174.service: Deactivated successfully. Oct 8 19:28:31.759756 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:28:31.761191 systemd-logind[1425]: Session 8 logged out. Waiting for processes to exit. Oct 8 19:28:31.762106 systemd-logind[1425]: Removed session 8. 
Oct 8 19:28:31.840207 systemd-networkd[1365]: cilium_host: Link UP Oct 8 19:28:31.840513 systemd-networkd[1365]: cilium_net: Link UP Oct 8 19:28:31.840516 systemd-networkd[1365]: cilium_net: Gained carrier Oct 8 19:28:31.840675 systemd-networkd[1365]: cilium_host: Gained carrier Oct 8 19:28:31.925236 systemd-networkd[1365]: cilium_vxlan: Link UP Oct 8 19:28:31.925242 systemd-networkd[1365]: cilium_vxlan: Gained carrier Oct 8 19:28:32.222825 kernel: NET: Registered PF_ALG protocol family Oct 8 19:28:32.249924 systemd-networkd[1365]: cilium_host: Gained IPv6LL Oct 8 19:28:32.420299 kubelet[2532]: E1008 19:28:32.420261 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:32.688083 systemd-networkd[1365]: cilium_net: Gained IPv6LL Oct 8 19:28:32.784338 systemd-networkd[1365]: lxc_health: Link UP Oct 8 19:28:32.797997 systemd-networkd[1365]: lxc_health: Gained carrier Oct 8 19:28:33.171926 systemd-networkd[1365]: lxc91475bccd747: Link UP Oct 8 19:28:33.174655 systemd-networkd[1365]: lxcf0408e26aef8: Link UP Oct 8 19:28:33.187819 kernel: eth0: renamed from tmpa70d1 Oct 8 19:28:33.193807 kernel: eth0: renamed from tmpb65bd Oct 8 19:28:33.203701 systemd-networkd[1365]: lxcf0408e26aef8: Gained carrier Oct 8 19:28:33.210887 systemd-networkd[1365]: lxc91475bccd747: Gained carrier Oct 8 19:28:33.421809 kubelet[2532]: E1008 19:28:33.421768 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:33.839984 systemd-networkd[1365]: cilium_vxlan: Gained IPv6LL Oct 8 19:28:34.352944 systemd-networkd[1365]: lxc_health: Gained IPv6LL Oct 8 19:28:34.416032 systemd-networkd[1365]: lxcf0408e26aef8: Gained IPv6LL Oct 8 19:28:34.423971 kubelet[2532]: E1008 19:28:34.423946 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:34.671994 systemd-networkd[1365]: lxc91475bccd747: Gained IPv6LL Oct 8 19:28:35.425533 kubelet[2532]: E1008 19:28:35.425507 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:36.767744 systemd[1]: Started sshd@8-10.0.0.22:22-10.0.0.1:40318.service - OpenSSH per-connection server daemon (10.0.0.1:40318). Oct 8 19:28:36.773071 containerd[1447]: time="2024-10-08T19:28:36.772968250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:28:36.773071 containerd[1447]: time="2024-10-08T19:28:36.773051218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:28:36.773531 containerd[1447]: time="2024-10-08T19:28:36.773086181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:28:36.773531 containerd[1447]: time="2024-10-08T19:28:36.773454775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:28:36.781252 containerd[1447]: time="2024-10-08T19:28:36.781167398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:28:36.781252 containerd[1447]: time="2024-10-08T19:28:36.781227084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:28:36.781252 containerd[1447]: time="2024-10-08T19:28:36.781244845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:28:36.781421 containerd[1447]: time="2024-10-08T19:28:36.781255966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:28:36.799978 systemd[1]: Started cri-containerd-b65bd91bd9f941af58a1b0b08feb14474e03db0dd830168ff109acbd3455e1c2.scope - libcontainer container b65bd91bd9f941af58a1b0b08feb14474e03db0dd830168ff109acbd3455e1c2. Oct 8 19:28:36.804437 systemd[1]: Started cri-containerd-a70d198437a26d2135bf7bc8fbdd8251af9044706dc898c2fd04acbdba31d46d.scope - libcontainer container a70d198437a26d2135bf7bc8fbdd8251af9044706dc898c2fd04acbdba31d46d. Oct 8 19:28:36.812531 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:28:36.818117 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 8 19:28:36.820751 sshd[3792]: Accepted publickey for core from 10.0.0.1 port 40318 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:28:36.823730 sshd[3792]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:28:36.828884 systemd-logind[1425]: New session 9 of user core. Oct 8 19:28:36.834054 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 8 19:28:36.840211 containerd[1447]: time="2024-10-08T19:28:36.840038528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nzbxt,Uid:2e8786bd-ac1f-49ec-9c98-42c2493ef9ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"a70d198437a26d2135bf7bc8fbdd8251af9044706dc898c2fd04acbdba31d46d\"" Oct 8 19:28:36.841520 containerd[1447]: time="2024-10-08T19:28:36.841484020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qsl9k,Uid:35f4ef99-90fb-4c2e-a6a6-e8f6ed4ac1bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"b65bd91bd9f941af58a1b0b08feb14474e03db0dd830168ff109acbd3455e1c2\"" Oct 8 19:28:36.842075 kubelet[2532]: E1008 19:28:36.842046 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:36.842889 kubelet[2532]: E1008 19:28:36.842464 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:36.846046 containerd[1447]: time="2024-10-08T19:28:36.845321410Z" level=info msg="CreateContainer within sandbox \"a70d198437a26d2135bf7bc8fbdd8251af9044706dc898c2fd04acbdba31d46d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:28:36.846046 containerd[1447]: time="2024-10-08T19:28:36.845869340Z" level=info msg="CreateContainer within sandbox \"b65bd91bd9f941af58a1b0b08feb14474e03db0dd830168ff109acbd3455e1c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:28:36.875054 containerd[1447]: time="2024-10-08T19:28:36.875001117Z" level=info msg="CreateContainer within sandbox \"a70d198437a26d2135bf7bc8fbdd8251af9044706dc898c2fd04acbdba31d46d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ff2f60d7cb99b4a91eccfd3036f330ec24f59070cff00624c7e6a9a71ce9031a\"" Oct 8 19:28:36.875794 containerd[1447]: time="2024-10-08T19:28:36.875755065Z" level=info msg="StartContainer for \"ff2f60d7cb99b4a91eccfd3036f330ec24f59070cff00624c7e6a9a71ce9031a\"" Oct 8 19:28:36.879913 containerd[1447]: time="2024-10-08T19:28:36.879862520Z" level=info msg="CreateContainer within sandbox \"b65bd91bd9f941af58a1b0b08feb14474e03db0dd830168ff109acbd3455e1c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f1807c7c744e41c74c519515de70cf71402eb919764655a1035716bb6a6f0cac\"" Oct 8 19:28:36.880550 containerd[1447]: time="2024-10-08T19:28:36.880489257Z" level=info msg="StartContainer for \"f1807c7c744e41c74c519515de70cf71402eb919764655a1035716bb6a6f0cac\"" Oct 8 19:28:36.911986 systemd[1]: Started cri-containerd-ff2f60d7cb99b4a91eccfd3036f330ec24f59070cff00624c7e6a9a71ce9031a.scope - libcontainer container ff2f60d7cb99b4a91eccfd3036f330ec24f59070cff00624c7e6a9a71ce9031a. Oct 8 19:28:36.915382 systemd[1]: Started cri-containerd-f1807c7c744e41c74c519515de70cf71402eb919764655a1035716bb6a6f0cac.scope - libcontainer container f1807c7c744e41c74c519515de70cf71402eb919764655a1035716bb6a6f0cac. 
Oct 8 19:28:36.949076 containerd[1447]: time="2024-10-08T19:28:36.945708286Z" level=info msg="StartContainer for \"ff2f60d7cb99b4a91eccfd3036f330ec24f59070cff00624c7e6a9a71ce9031a\" returns successfully" Oct 8 19:28:36.954352 containerd[1447]: time="2024-10-08T19:28:36.953693174Z" level=info msg="StartContainer for \"f1807c7c744e41c74c519515de70cf71402eb919764655a1035716bb6a6f0cac\" returns successfully" Oct 8 19:28:36.993704 sshd[3792]: pam_unix(sshd:session): session closed for user core Oct 8 19:28:37.002679 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 19:28:37.006979 systemd[1]: sshd@8-10.0.0.22:22-10.0.0.1:40318.service: Deactivated successfully. Oct 8 19:28:37.009143 systemd-logind[1425]: Session 9 logged out. Waiting for processes to exit. Oct 8 19:28:37.010750 systemd-logind[1425]: Removed session 9. Oct 8 19:28:37.431924 kubelet[2532]: E1008 19:28:37.431828 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:37.435404 kubelet[2532]: E1008 19:28:37.434946 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:37.455400 kubelet[2532]: I1008 19:28:37.455353 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qsl9k" podStartSLOduration=20.455282866 podStartE2EDuration="20.455282866s" podCreationTimestamp="2024-10-08 19:28:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:28:37.454829666 +0000 UTC m=+36.219062467" watchObservedRunningTime="2024-10-08 19:28:37.455282866 +0000 UTC m=+36.219515707" Oct 8 19:28:37.490149 kubelet[2532]: I1008 19:28:37.490109 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-nzbxt" podStartSLOduration=20.490067155 podStartE2EDuration="20.490067155s" podCreationTimestamp="2024-10-08 19:28:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:28:37.489932863 +0000 UTC m=+36.254165664" watchObservedRunningTime="2024-10-08 19:28:37.490067155 +0000 UTC m=+36.254299956" Oct 8 19:28:37.780943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3143215252.mount: Deactivated successfully. 
Oct 8 19:28:38.438231 kubelet[2532]: E1008 19:28:38.438198 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:38.438572 kubelet[2532]: E1008 19:28:38.438322 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:39.439729 kubelet[2532]: E1008 19:28:39.439693 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:39.440146 kubelet[2532]: E1008 19:28:39.440112 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:28:42.005192 systemd[1]: Started sshd@9-10.0.0.22:22-10.0.0.1:40334.service - OpenSSH per-connection server daemon (10.0.0.1:40334). Oct 8 19:28:42.040315 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 40334 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:28:42.041765 sshd[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:28:42.045472 systemd-logind[1425]: New session 10 of user core. Oct 8 19:28:42.051947 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 19:28:42.162289 sshd[3966]: pam_unix(sshd:session): session closed for user core Oct 8 19:28:42.165516 systemd[1]: sshd@9-10.0.0.22:22-10.0.0.1:40334.service: Deactivated successfully. Oct 8 19:28:42.167210 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 19:28:42.167870 systemd-logind[1425]: Session 10 logged out. Waiting for processes to exit. Oct 8 19:28:42.168931 systemd-logind[1425]: Removed session 10. Oct 8 19:28:47.176392 systemd[1]: Started sshd@10-10.0.0.22:22-10.0.0.1:45054.service - OpenSSH per-connection server daemon (10.0.0.1:45054). Oct 8 19:28:47.209844 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 45054 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:28:47.211084 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:28:47.215556 systemd-logind[1425]: New session 11 of user core. Oct 8 19:28:47.221984 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 19:28:47.330685 sshd[3981]: pam_unix(sshd:session): session closed for user core Oct 8 19:28:47.344422 systemd[1]: sshd@10-10.0.0.22:22-10.0.0.1:45054.service: Deactivated successfully. Oct 8 19:28:47.345992 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 19:28:47.347234 systemd-logind[1425]: Session 11 logged out. Waiting for processes to exit. Oct 8 19:28:47.354121 systemd[1]: Started sshd@11-10.0.0.22:22-10.0.0.1:45064.service - OpenSSH per-connection server daemon (10.0.0.1:45064). Oct 8 19:28:47.355306 systemd-logind[1425]: Removed session 11. Oct 8 19:28:47.383476 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 45064 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:28:47.384715 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:28:47.388820 systemd-logind[1425]: New session 12 of user core. Oct 8 19:28:47.403977 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 8 19:28:47.551027 sshd[3996]: pam_unix(sshd:session): session closed for user core Oct 8 19:28:47.562476 systemd[1]: sshd@11-10.0.0.22:22-10.0.0.1:45064.service: Deactivated successfully. Oct 8 19:28:47.566340 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 19:28:47.568961 systemd-logind[1425]: Session 12 logged out. Waiting for processes to exit. Oct 8 19:28:47.575138 systemd[1]: Started sshd@12-10.0.0.22:22-10.0.0.1:45080.service - OpenSSH per-connection server daemon (10.0.0.1:45080). Oct 8 19:28:47.577126 systemd-logind[1425]: Removed session 12. Oct 8 19:28:47.609258 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 45080 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:28:47.610509 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:28:47.614294 systemd-logind[1425]: New session 13 of user core. Oct 8 19:28:47.622947 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 19:28:47.732301 sshd[4012]: pam_unix(sshd:session): session closed for user core Oct 8 19:28:47.737130 systemd[1]: sshd@12-10.0.0.22:22-10.0.0.1:45080.service: Deactivated successfully. Oct 8 19:28:47.738675 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 19:28:47.740701 systemd-logind[1425]: Session 13 logged out. Waiting for processes to exit. Oct 8 19:28:47.741915 systemd-logind[1425]: Removed session 13. Oct 8 19:28:52.743307 systemd[1]: Started sshd@13-10.0.0.22:22-10.0.0.1:38600.service - OpenSSH per-connection server daemon (10.0.0.1:38600). Oct 8 19:28:52.783633 sshd[4026]: Accepted publickey for core from 10.0.0.1 port 38600 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:28:52.785358 sshd[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:28:52.789444 systemd-logind[1425]: New session 14 of user core. Oct 8 19:28:52.795935 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 19:28:52.908460 sshd[4026]: pam_unix(sshd:session): session closed for user core Oct 8 19:28:52.911323 systemd-logind[1425]: Session 14 logged out. Waiting for processes to exit. Oct 8 19:28:52.911584 systemd[1]: sshd@13-10.0.0.22:22-10.0.0.1:38600.service: Deactivated successfully. Oct 8 19:28:52.913184 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 19:28:52.914668 systemd-logind[1425]: Removed session 14. Oct 8 19:28:57.922092 systemd[1]: Started sshd@14-10.0.0.22:22-10.0.0.1:38612.service - OpenSSH per-connection server daemon (10.0.0.1:38612). Oct 8 19:28:57.955579 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 38612 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:28:57.956192 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:28:57.962057 systemd-logind[1425]: New session 15 of user core. Oct 8 19:28:57.972967 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 19:28:58.092164 sshd[4040]: pam_unix(sshd:session): session closed for user core Oct 8 19:28:58.105671 systemd[1]: sshd@14-10.0.0.22:22-10.0.0.1:38612.service: Deactivated successfully. Oct 8 19:28:58.107230 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 19:28:58.111737 systemd-logind[1425]: Session 15 logged out. Waiting for processes to exit. Oct 8 19:28:58.128161 systemd[1]: Started sshd@15-10.0.0.22:22-10.0.0.1:38616.service - OpenSSH per-connection server daemon (10.0.0.1:38616). Oct 8 19:28:58.134972 systemd-logind[1425]: Removed session 15. 
Oct 8 19:28:58.167497 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 38616 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:28:58.169577 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:28:58.175362 systemd-logind[1425]: New session 16 of user core. Oct 8 19:28:58.187955 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 19:28:58.458761 sshd[4054]: pam_unix(sshd:session): session closed for user core Oct 8 19:28:58.467251 systemd[1]: sshd@15-10.0.0.22:22-10.0.0.1:38616.service: Deactivated successfully. Oct 8 19:28:58.468874 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 19:28:58.474037 systemd-logind[1425]: Session 16 logged out. Waiting for processes to exit. Oct 8 19:28:58.486091 systemd[1]: Started sshd@16-10.0.0.22:22-10.0.0.1:38618.service - OpenSSH per-connection server daemon (10.0.0.1:38618). Oct 8 19:28:58.487438 systemd-logind[1425]: Removed session 16. Oct 8 19:28:58.520361 sshd[4066]: Accepted publickey for core from 10.0.0.1 port 38618 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:28:58.521568 sshd[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:28:58.527360 systemd-logind[1425]: New session 17 of user core. Oct 8 19:28:58.531955 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 19:28:59.859306 sshd[4066]: pam_unix(sshd:session): session closed for user core Oct 8 19:28:59.866560 systemd[1]: sshd@16-10.0.0.22:22-10.0.0.1:38618.service: Deactivated successfully. Oct 8 19:28:59.870114 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 19:28:59.872228 systemd-logind[1425]: Session 17 logged out. Waiting for processes to exit. Oct 8 19:28:59.881345 systemd[1]: Started sshd@17-10.0.0.22:22-10.0.0.1:38622.service - OpenSSH per-connection server daemon (10.0.0.1:38622). Oct 8 19:28:59.882382 systemd-logind[1425]: Removed session 17. Oct 8 19:28:59.915724 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 38622 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:28:59.917241 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:28:59.921728 systemd-logind[1425]: New session 18 of user core. Oct 8 19:28:59.933932 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 19:29:00.154476 sshd[4086]: pam_unix(sshd:session): session closed for user core Oct 8 19:29:00.171293 systemd[1]: sshd@17-10.0.0.22:22-10.0.0.1:38622.service: Deactivated successfully. Oct 8 19:29:00.172857 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 19:29:00.174278 systemd-logind[1425]: Session 18 logged out. Waiting for processes to exit. Oct 8 19:29:00.181648 systemd[1]: Started sshd@18-10.0.0.22:22-10.0.0.1:38626.service - OpenSSH per-connection server daemon (10.0.0.1:38626). Oct 8 19:29:00.184172 systemd-logind[1425]: Removed session 18. Oct 8 19:29:00.212343 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 38626 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:29:00.213710 sshd[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:29:00.217204 systemd-logind[1425]: New session 19 of user core. Oct 8 19:29:00.223956 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 8 19:29:00.330681 sshd[4099]: pam_unix(sshd:session): session closed for user core Oct 8 19:29:00.334941 systemd[1]: sshd@18-10.0.0.22:22-10.0.0.1:38626.service: Deactivated successfully. Oct 8 19:29:00.336645 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 19:29:00.337271 systemd-logind[1425]: Session 19 logged out. Waiting for processes to exit. Oct 8 19:29:00.338106 systemd-logind[1425]: Removed session 19. Oct 8 19:29:05.345380 systemd[1]: Started sshd@19-10.0.0.22:22-10.0.0.1:33006.service - OpenSSH per-connection server daemon (10.0.0.1:33006). Oct 8 19:29:05.379470 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 33006 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:29:05.380894 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:29:05.384410 systemd-logind[1425]: New session 20 of user core. Oct 8 19:29:05.390965 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 8 19:29:05.494748 sshd[4118]: pam_unix(sshd:session): session closed for user core Oct 8 19:29:05.498627 systemd[1]: sshd@19-10.0.0.22:22-10.0.0.1:33006.service: Deactivated successfully. Oct 8 19:29:05.500255 systemd[1]: session-20.scope: Deactivated successfully. Oct 8 19:29:05.500783 systemd-logind[1425]: Session 20 logged out. Waiting for processes to exit. Oct 8 19:29:05.501720 systemd-logind[1425]: Removed session 20. Oct 8 19:29:10.505167 systemd[1]: Started sshd@20-10.0.0.22:22-10.0.0.1:33016.service - OpenSSH per-connection server daemon (10.0.0.1:33016). Oct 8 19:29:10.538742 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 33016 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:29:10.540007 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:29:10.543396 systemd-logind[1425]: New session 21 of user core. Oct 8 19:29:10.549930 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 8 19:29:10.652233 sshd[4133]: pam_unix(sshd:session): session closed for user core Oct 8 19:29:10.655404 systemd[1]: sshd@20-10.0.0.22:22-10.0.0.1:33016.service: Deactivated successfully. Oct 8 19:29:10.657048 systemd[1]: session-21.scope: Deactivated successfully. Oct 8 19:29:10.657628 systemd-logind[1425]: Session 21 logged out. Waiting for processes to exit. Oct 8 19:29:10.658443 systemd-logind[1425]: Removed session 21. Oct 8 19:29:15.663753 systemd[1]: Started sshd@21-10.0.0.22:22-10.0.0.1:45240.service - OpenSSH per-connection server daemon (10.0.0.1:45240). Oct 8 19:29:15.700112 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 45240 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:29:15.701497 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:29:15.705312 systemd-logind[1425]: New session 22 of user core. Oct 8 19:29:15.712954 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 8 19:29:15.828627 sshd[4147]: pam_unix(sshd:session): session closed for user core Oct 8 19:29:15.839217 systemd[1]: sshd@21-10.0.0.22:22-10.0.0.1:45240.service: Deactivated successfully. Oct 8 19:29:15.840976 systemd[1]: session-22.scope: Deactivated successfully. Oct 8 19:29:15.842705 systemd-logind[1425]: Session 22 logged out. Waiting for processes to exit. Oct 8 19:29:15.853105 systemd[1]: Started sshd@22-10.0.0.22:22-10.0.0.1:45252.service - OpenSSH per-connection server daemon (10.0.0.1:45252). Oct 8 19:29:15.854688 systemd-logind[1425]: Removed session 22. 
Oct 8 19:29:15.883972 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 45252 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:29:15.885299 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:29:15.889047 systemd-logind[1425]: New session 23 of user core. Oct 8 19:29:15.897957 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 8 19:29:17.756178 containerd[1447]: time="2024-10-08T19:29:17.756114071Z" level=info msg="StopContainer for \"8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d\" with timeout 30 (s)" Oct 8 19:29:17.758279 containerd[1447]: time="2024-10-08T19:29:17.758239180Z" level=info msg="Stop container \"8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d\" with signal terminated" Oct 8 19:29:17.769051 systemd[1]: cri-containerd-8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d.scope: Deactivated successfully. Oct 8 19:29:17.778710 containerd[1447]: time="2024-10-08T19:29:17.778671615Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:29:17.785721 containerd[1447]: time="2024-10-08T19:29:17.785679009Z" level=info msg="StopContainer for \"c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5\" with timeout 2 (s)" Oct 8 19:29:17.785972 containerd[1447]: time="2024-10-08T19:29:17.785946083Z" level=info msg="Stop container \"c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5\" with signal terminated" Oct 8 19:29:17.790252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d-rootfs.mount: Deactivated successfully. Oct 8 19:29:17.793015 systemd-networkd[1365]: lxc_health: Link DOWN Oct 8 19:29:17.793034 systemd-networkd[1365]: lxc_health: Lost carrier Oct 8 19:29:17.796107 containerd[1447]: time="2024-10-08T19:29:17.796044523Z" level=info msg="shim disconnected" id=8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d namespace=k8s.io Oct 8 19:29:17.796107 containerd[1447]: time="2024-10-08T19:29:17.796097682Z" level=warning msg="cleaning up after shim disconnected" id=8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d namespace=k8s.io Oct 8 19:29:17.796107 containerd[1447]: time="2024-10-08T19:29:17.796107322Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:29:17.816997 systemd[1]: cri-containerd-c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5.scope: Deactivated successfully. Oct 8 19:29:17.817256 systemd[1]: cri-containerd-c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5.scope: Consumed 6.559s CPU time. Oct 8 19:29:17.836550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5-rootfs.mount: Deactivated successfully. 
Oct 8 19:29:17.845868 containerd[1447]: time="2024-10-08T19:29:17.845473750Z" level=info msg="StopContainer for \"8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d\" returns successfully" Oct 8 19:29:17.850824 containerd[1447]: time="2024-10-08T19:29:17.849003266Z" level=info msg="StopPodSandbox for \"e2f858a9003f0d7bfb0395c0d0df1fb2a9c249e66414f7d60aeb6b160d36934c\"" Oct 8 19:29:17.850824 containerd[1447]: time="2024-10-08T19:29:17.849066384Z" level=info msg="Container to stop \"8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 19:29:17.851226 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2f858a9003f0d7bfb0395c0d0df1fb2a9c249e66414f7d60aeb6b160d36934c-shm.mount: Deactivated successfully. Oct 8 19:29:17.853311 containerd[1447]: time="2024-10-08T19:29:17.853256965Z" level=info msg="shim disconnected" id=c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5 namespace=k8s.io Oct 8 19:29:17.853311 containerd[1447]: time="2024-10-08T19:29:17.853310284Z" level=warning msg="cleaning up after shim disconnected" id=c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5 namespace=k8s.io Oct 8 19:29:17.853426 containerd[1447]: time="2024-10-08T19:29:17.853318883Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:29:17.857139 systemd[1]: cri-containerd-e2f858a9003f0d7bfb0395c0d0df1fb2a9c249e66414f7d60aeb6b160d36934c.scope: Deactivated successfully. Oct 8 19:29:17.868936 containerd[1447]: time="2024-10-08T19:29:17.868883434Z" level=info msg="StopContainer for \"c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5\" returns successfully" Oct 8 19:29:17.869915 containerd[1447]: time="2024-10-08T19:29:17.869874810Z" level=info msg="StopPodSandbox for \"47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a\"" Oct 8 19:29:17.869999 containerd[1447]: time="2024-10-08T19:29:17.869931289Z" level=info msg="Container to stop \"b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 19:29:17.869999 containerd[1447]: time="2024-10-08T19:29:17.869969408Z" level=info msg="Container to stop \"bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 19:29:17.869999 containerd[1447]: time="2024-10-08T19:29:17.869980288Z" level=info msg="Container to stop \"0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 19:29:17.869999 containerd[1447]: time="2024-10-08T19:29:17.869990368Z" level=info msg="Container to stop \"c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 19:29:17.870133 containerd[1447]: time="2024-10-08T19:29:17.870002407Z" level=info msg="Container to stop \"71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 19:29:17.871784 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a-shm.mount: Deactivated successfully. Oct 8 19:29:17.878230 systemd[1]: cri-containerd-47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a.scope: Deactivated successfully. 
Oct 8 19:29:17.881620 containerd[1447]: time="2024-10-08T19:29:17.880924228Z" level=info msg="shim disconnected" id=e2f858a9003f0d7bfb0395c0d0df1fb2a9c249e66414f7d60aeb6b160d36934c namespace=k8s.io Oct 8 19:29:17.881620 containerd[1447]: time="2024-10-08T19:29:17.880983387Z" level=warning msg="cleaning up after shim disconnected" id=e2f858a9003f0d7bfb0395c0d0df1fb2a9c249e66414f7d60aeb6b160d36934c namespace=k8s.io Oct 8 19:29:17.881620 containerd[1447]: time="2024-10-08T19:29:17.880992347Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:29:17.895270 containerd[1447]: time="2024-10-08T19:29:17.895215969Z" level=warning msg="cleanup warnings time=\"2024-10-08T19:29:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 8 19:29:17.897716 containerd[1447]: time="2024-10-08T19:29:17.897678870Z" level=info msg="TearDown network for sandbox \"e2f858a9003f0d7bfb0395c0d0df1fb2a9c249e66414f7d60aeb6b160d36934c\" successfully" Oct 8 19:29:17.897716 containerd[1447]: time="2024-10-08T19:29:17.897711470Z" level=info msg="StopPodSandbox for \"e2f858a9003f0d7bfb0395c0d0df1fb2a9c249e66414f7d60aeb6b160d36934c\" returns successfully" Oct 8 19:29:17.905300 containerd[1447]: time="2024-10-08T19:29:17.905238411Z" level=info msg="shim disconnected" id=47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a namespace=k8s.io Oct 8 19:29:17.906672 containerd[1447]: time="2024-10-08T19:29:17.906638698Z" level=warning msg="cleaning up after shim disconnected" id=47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a namespace=k8s.io Oct 8 19:29:17.906752 containerd[1447]: time="2024-10-08T19:29:17.906738375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:29:17.919076 containerd[1447]: time="2024-10-08T19:29:17.918916566Z" level=info msg="TearDown network for sandbox \"47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a\" successfully" Oct 8 19:29:17.919076 containerd[1447]: time="2024-10-08T19:29:17.918954885Z" level=info msg="StopPodSandbox for \"47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a\" returns successfully" Oct 8 19:29:17.980557 kubelet[2532]: I1008 19:29:17.980515 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-bpf-maps\") pod \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " Oct 8 19:29:17.980557 kubelet[2532]: I1008 19:29:17.980564 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-cilium-config-path\") pod \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " Oct 8 19:29:17.980968 kubelet[2532]: I1008 19:29:17.980588 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-clustermesh-secrets\") pod \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " Oct 8 19:29:17.980968 kubelet[2532]: I1008 19:29:17.980612 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-hubble-tls\") pod 
\"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " Oct 8 19:29:17.980968 kubelet[2532]: I1008 19:29:17.980630 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-etc-cni-netd\") pod \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " Oct 8 19:29:17.980968 kubelet[2532]: I1008 19:29:17.980649 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqj8c\" (UniqueName: \"kubernetes.io/projected/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-kube-api-access-jqj8c\") pod \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " Oct 8 19:29:17.980968 kubelet[2532]: I1008 19:29:17.980665 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-cni-path\") pod \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " Oct 8 19:29:17.980968 kubelet[2532]: I1008 19:29:17.980685 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f4d201c-7278-4224-972c-05bdd7248a44-cilium-config-path\") pod \"5f4d201c-7278-4224-972c-05bdd7248a44\" (UID: \"5f4d201c-7278-4224-972c-05bdd7248a44\") " Oct 8 19:29:17.981160 kubelet[2532]: I1008 19:29:17.980705 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-lib-modules\") pod \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " Oct 8 19:29:17.981160 kubelet[2532]: I1008 19:29:17.980722 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-hostproc\") pod \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " Oct 8 19:29:17.981160 kubelet[2532]: I1008 19:29:17.980741 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-cilium-run\") pod \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " Oct 8 19:29:17.981160 kubelet[2532]: I1008 19:29:17.980760 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-host-proc-sys-kernel\") pod \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " Oct 8 19:29:17.981160 kubelet[2532]: I1008 19:29:17.980777 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-cilium-cgroup\") pod \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " Oct 8 19:29:17.981160 kubelet[2532]: I1008 19:29:17.980810 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-host-proc-sys-net\") pod \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\" (UID: 
\"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " Oct 8 19:29:17.981292 kubelet[2532]: I1008 19:29:17.980829 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-xtables-lock\") pod \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\" (UID: \"1c80d9b7-0d35-4215-a6dc-bf743dd049e8\") " Oct 8 19:29:17.981292 kubelet[2532]: I1008 19:29:17.980864 2532 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4jxr\" (UniqueName: \"kubernetes.io/projected/5f4d201c-7278-4224-972c-05bdd7248a44-kube-api-access-x4jxr\") pod \"5f4d201c-7278-4224-972c-05bdd7248a44\" (UID: \"5f4d201c-7278-4224-972c-05bdd7248a44\") " Oct 8 19:29:17.983550 kubelet[2532]: I1008 19:29:17.983182 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-cni-path" (OuterVolumeSpecName: "cni-path") pod "1c80d9b7-0d35-4215-a6dc-bf743dd049e8" (UID: "1c80d9b7-0d35-4215-a6dc-bf743dd049e8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:29:17.983550 kubelet[2532]: I1008 19:29:17.983203 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1c80d9b7-0d35-4215-a6dc-bf743dd049e8" (UID: "1c80d9b7-0d35-4215-a6dc-bf743dd049e8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:29:17.983550 kubelet[2532]: I1008 19:29:17.983239 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1c80d9b7-0d35-4215-a6dc-bf743dd049e8" (UID: "1c80d9b7-0d35-4215-a6dc-bf743dd049e8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:29:17.984752 kubelet[2532]: I1008 19:29:17.984438 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1c80d9b7-0d35-4215-a6dc-bf743dd049e8" (UID: "1c80d9b7-0d35-4215-a6dc-bf743dd049e8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:29:17.984752 kubelet[2532]: I1008 19:29:17.984511 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-hostproc" (OuterVolumeSpecName: "hostproc") pod "1c80d9b7-0d35-4215-a6dc-bf743dd049e8" (UID: "1c80d9b7-0d35-4215-a6dc-bf743dd049e8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:29:17.985918 kubelet[2532]: I1008 19:29:17.985881 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1c80d9b7-0d35-4215-a6dc-bf743dd049e8" (UID: "1c80d9b7-0d35-4215-a6dc-bf743dd049e8"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 19:29:17.985982 kubelet[2532]: I1008 19:29:17.985933 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1c80d9b7-0d35-4215-a6dc-bf743dd049e8" (UID: "1c80d9b7-0d35-4215-a6dc-bf743dd049e8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:29:17.987883 kubelet[2532]: I1008 19:29:17.987849 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f4d201c-7278-4224-972c-05bdd7248a44-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5f4d201c-7278-4224-972c-05bdd7248a44" (UID: "5f4d201c-7278-4224-972c-05bdd7248a44"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 19:29:17.988030 kubelet[2532]: I1008 19:29:17.987943 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1c80d9b7-0d35-4215-a6dc-bf743dd049e8" (UID: "1c80d9b7-0d35-4215-a6dc-bf743dd049e8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:29:17.988030 kubelet[2532]: I1008 19:29:17.987906 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1c80d9b7-0d35-4215-a6dc-bf743dd049e8" (UID: "1c80d9b7-0d35-4215-a6dc-bf743dd049e8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:29:17.988030 kubelet[2532]: I1008 19:29:17.987928 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1c80d9b7-0d35-4215-a6dc-bf743dd049e8" (UID: "1c80d9b7-0d35-4215-a6dc-bf743dd049e8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:29:17.988030 kubelet[2532]: I1008 19:29:17.988000 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1c80d9b7-0d35-4215-a6dc-bf743dd049e8" (UID: "1c80d9b7-0d35-4215-a6dc-bf743dd049e8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:29:17.990957 kubelet[2532]: I1008 19:29:17.990923 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1c80d9b7-0d35-4215-a6dc-bf743dd049e8" (UID: "1c80d9b7-0d35-4215-a6dc-bf743dd049e8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 8 19:29:17.991703 kubelet[2532]: I1008 19:29:17.991661 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-kube-api-access-jqj8c" (OuterVolumeSpecName: "kube-api-access-jqj8c") pod "1c80d9b7-0d35-4215-a6dc-bf743dd049e8" (UID: "1c80d9b7-0d35-4215-a6dc-bf743dd049e8"). 
InnerVolumeSpecName "kube-api-access-jqj8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 19:29:17.992297 kubelet[2532]: I1008 19:29:17.992258 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1c80d9b7-0d35-4215-a6dc-bf743dd049e8" (UID: "1c80d9b7-0d35-4215-a6dc-bf743dd049e8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 19:29:17.992580 kubelet[2532]: I1008 19:29:17.992548 2532 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f4d201c-7278-4224-972c-05bdd7248a44-kube-api-access-x4jxr" (OuterVolumeSpecName: "kube-api-access-x4jxr") pod "5f4d201c-7278-4224-972c-05bdd7248a44" (UID: "5f4d201c-7278-4224-972c-05bdd7248a44"). InnerVolumeSpecName "kube-api-access-x4jxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 19:29:18.081846 kubelet[2532]: I1008 19:29:18.081729 2532 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 8 19:29:18.081846 kubelet[2532]: I1008 19:29:18.081765 2532 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 8 19:29:18.081846 kubelet[2532]: I1008 19:29:18.081779 2532 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 8 19:29:18.081846 kubelet[2532]: I1008 19:29:18.081804 2532 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f4d201c-7278-4224-972c-05bdd7248a44-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 8 19:29:18.081846 kubelet[2532]: I1008 19:29:18.081815 2532 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 8 19:29:18.081846 kubelet[2532]: I1008 19:29:18.081825 2532 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 8 19:29:18.081846 kubelet[2532]: I1008 19:29:18.081834 2532 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jqj8c\" (UniqueName: \"kubernetes.io/projected/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-kube-api-access-jqj8c\") on node \"localhost\" DevicePath \"\"" Oct 8 19:29:18.081846 kubelet[2532]: I1008 19:29:18.081844 2532 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 8 19:29:18.082104 kubelet[2532]: I1008 19:29:18.081853 2532 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 8 19:29:18.082104 kubelet[2532]: I1008 19:29:18.081862 2532 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 8 19:29:18.082104 kubelet[2532]: I1008 19:29:18.081871 2532 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 8 19:29:18.082104 kubelet[2532]: I1008 19:29:18.081880 2532 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 8 19:29:18.082104 kubelet[2532]: I1008 19:29:18.081888 2532 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 8 19:29:18.082104 kubelet[2532]: I1008 19:29:18.081897 2532 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 8 19:29:18.082104 kubelet[2532]: I1008 19:29:18.081907 2532 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c80d9b7-0d35-4215-a6dc-bf743dd049e8-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 8 19:29:18.082104 kubelet[2532]: I1008 19:29:18.081917 2532 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-x4jxr\" (UniqueName: \"kubernetes.io/projected/5f4d201c-7278-4224-972c-05bdd7248a44-kube-api-access-x4jxr\") on node \"localhost\" DevicePath \"\"" Oct 8 19:29:18.520936 systemd[1]: Removed slice kubepods-burstable-pod1c80d9b7_0d35_4215_a6dc_bf743dd049e8.slice - libcontainer container kubepods-burstable-pod1c80d9b7_0d35_4215_a6dc_bf743dd049e8.slice. Oct 8 19:29:18.521066 systemd[1]: kubepods-burstable-pod1c80d9b7_0d35_4215_a6dc_bf743dd049e8.slice: Consumed 6.691s CPU time. Oct 8 19:29:18.522892 systemd[1]: Removed slice kubepods-besteffort-pod5f4d201c_7278_4224_972c_05bdd7248a44.slice - libcontainer container kubepods-besteffort-pod5f4d201c_7278_4224_972c_05bdd7248a44.slice. 
Oct 8 19:29:18.523730 kubelet[2532]: I1008 19:29:18.523687 2532 scope.go:117] "RemoveContainer" containerID="c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5" Oct 8 19:29:18.525579 containerd[1447]: time="2024-10-08T19:29:18.525403097Z" level=info msg="RemoveContainer for \"c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5\"" Oct 8 19:29:18.572762 containerd[1447]: time="2024-10-08T19:29:18.572628150Z" level=info msg="RemoveContainer for \"c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5\" returns successfully" Oct 8 19:29:18.572979 kubelet[2532]: I1008 19:29:18.572931 2532 scope.go:117] "RemoveContainer" containerID="0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357" Oct 8 19:29:18.574295 containerd[1447]: time="2024-10-08T19:29:18.574103438Z" level=info msg="RemoveContainer for \"0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357\"" Oct 8 19:29:18.593787 containerd[1447]: time="2024-10-08T19:29:18.593697292Z" level=info msg="RemoveContainer for \"0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357\" returns successfully" Oct 8 19:29:18.594208 kubelet[2532]: I1008 19:29:18.594161 2532 scope.go:117] "RemoveContainer" containerID="b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d" Oct 8 19:29:18.597139 containerd[1447]: time="2024-10-08T19:29:18.597104898Z" level=info msg="RemoveContainer for \"b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d\"" Oct 8 19:29:18.608541 containerd[1447]: time="2024-10-08T19:29:18.608445611Z" level=info msg="RemoveContainer for \"b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d\" returns successfully" Oct 8 19:29:18.608683 kubelet[2532]: I1008 19:29:18.608653 2532 scope.go:117] "RemoveContainer" containerID="bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042" Oct 8 19:29:18.609764 containerd[1447]: time="2024-10-08T19:29:18.609520508Z" level=info msg="RemoveContainer for \"bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042\"" Oct 8 19:29:18.612192 containerd[1447]: time="2024-10-08T19:29:18.612112891Z" level=info msg="RemoveContainer for \"bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042\" returns successfully" Oct 8 19:29:18.612447 kubelet[2532]: I1008 19:29:18.612419 2532 scope.go:117] "RemoveContainer" containerID="71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338" Oct 8 19:29:18.614044 containerd[1447]: time="2024-10-08T19:29:18.614018410Z" level=info msg="RemoveContainer for \"71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338\"" Oct 8 19:29:18.616494 containerd[1447]: time="2024-10-08T19:29:18.616463677Z" level=info msg="RemoveContainer for \"71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338\" returns successfully" Oct 8 19:29:18.616765 kubelet[2532]: I1008 19:29:18.616735 2532 scope.go:117] "RemoveContainer" containerID="c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5" Oct 8 19:29:18.621474 containerd[1447]: time="2024-10-08T19:29:18.616941386Z" level=error msg="ContainerStatus for \"c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5\": not found" Oct 8 19:29:18.626454 kubelet[2532]: E1008 19:29:18.626423 2532 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try 
to find container \"c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5\": not found" containerID="c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5" Oct 8 19:29:18.629961 kubelet[2532]: I1008 19:29:18.629929 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5"} err="failed to get container status \"c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7aab23717e1ea25ffefe83f4e0ccd871e32a128bf988a21361b1f265896e8d5\": not found" Oct 8 19:29:18.630016 kubelet[2532]: I1008 19:29:18.629968 2532 scope.go:117] "RemoveContainer" containerID="0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357" Oct 8 19:29:18.630242 containerd[1447]: time="2024-10-08T19:29:18.630207418Z" level=error msg="ContainerStatus for \"0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357\": not found" Oct 8 19:29:18.630428 kubelet[2532]: E1008 19:29:18.630408 2532 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357\": not found" containerID="0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357" Oct 8 19:29:18.630460 kubelet[2532]: I1008 19:29:18.630446 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357"} err="failed to get container status \"0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357\": rpc error: code = NotFound desc = an error occurred when try to find container \"0910642c1b1a5b9afd1c2391d0f3e4ea8eeae2bf72f2676b42779edf07106357\": not found" Oct 8 19:29:18.630487 kubelet[2532]: I1008 19:29:18.630463 2532 scope.go:117] "RemoveContainer" containerID="b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d" Oct 8 19:29:18.630782 containerd[1447]: time="2024-10-08T19:29:18.630750086Z" level=error msg="ContainerStatus for \"b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d\": not found" Oct 8 19:29:18.630898 kubelet[2532]: E1008 19:29:18.630884 2532 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d\": not found" containerID="b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d" Oct 8 19:29:18.630932 kubelet[2532]: I1008 19:29:18.630915 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d"} err="failed to get container status \"b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d\": rpc error: code = NotFound desc = an error occurred when try to find container \"b85f324b18c4846a6f6c7c0299e056d1651166650938bb6f022924b81e10d10d\": not found" Oct 8 19:29:18.630932 kubelet[2532]: I1008 19:29:18.630927 
2532 scope.go:117] "RemoveContainer" containerID="bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042" Oct 8 19:29:18.631100 containerd[1447]: time="2024-10-08T19:29:18.631073999Z" level=error msg="ContainerStatus for \"bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042\": not found" Oct 8 19:29:18.631221 kubelet[2532]: E1008 19:29:18.631203 2532 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042\": not found" containerID="bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042" Oct 8 19:29:18.631245 kubelet[2532]: I1008 19:29:18.631237 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042"} err="failed to get container status \"bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf64042e66926d4efe636afa9cd3d8aa9d4fb15d18aba256f459e2ae03728042\": not found" Oct 8 19:29:18.631271 kubelet[2532]: I1008 19:29:18.631249 2532 scope.go:117] "RemoveContainer" containerID="71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338" Oct 8 19:29:18.631426 containerd[1447]: time="2024-10-08T19:29:18.631397952Z" level=error msg="ContainerStatus for \"71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338\": not found" Oct 8 19:29:18.631705 kubelet[2532]: E1008 19:29:18.631692 2532 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338\": not found" containerID="71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338" Oct 8 19:29:18.631739 kubelet[2532]: I1008 19:29:18.631719 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338"} err="failed to get container status \"71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338\": rpc error: code = NotFound desc = an error occurred when try to find container \"71d5e326b120c1259a9747ee58e583ed0a8f96b1a92e45e0dfdde60999d9f338\": not found" Oct 8 19:29:18.631739 kubelet[2532]: I1008 19:29:18.631732 2532 scope.go:117] "RemoveContainer" containerID="8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d" Oct 8 19:29:18.632891 containerd[1447]: time="2024-10-08T19:29:18.632851720Z" level=info msg="RemoveContainer for \"8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d\"" Oct 8 19:29:18.635186 containerd[1447]: time="2024-10-08T19:29:18.635152350Z" level=info msg="RemoveContainer for \"8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d\" returns successfully" Oct 8 19:29:18.635346 kubelet[2532]: I1008 19:29:18.635314 2532 scope.go:117] "RemoveContainer" containerID="8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d" Oct 8 19:29:18.635560 containerd[1447]: 
time="2024-10-08T19:29:18.635520902Z" level=error msg="ContainerStatus for \"8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d\": not found" Oct 8 19:29:18.635668 kubelet[2532]: E1008 19:29:18.635654 2532 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d\": not found" containerID="8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d" Oct 8 19:29:18.635699 kubelet[2532]: I1008 19:29:18.635690 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d"} err="failed to get container status \"8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d\": rpc error: code = NotFound desc = an error occurred when try to find container \"8def1f7d75c2b2c1bc62869fe1c2ca9784a25cdc020e1405073bf8dc4dfb217d\": not found" Oct 8 19:29:18.766534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2f858a9003f0d7bfb0395c0d0df1fb2a9c249e66414f7d60aeb6b160d36934c-rootfs.mount: Deactivated successfully. Oct 8 19:29:18.766640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47997498604e58b20101ed5dac0864a18dc78cd59b5dd89ea93a000cd447053a-rootfs.mount: Deactivated successfully. Oct 8 19:29:18.766691 systemd[1]: var-lib-kubelet-pods-5f4d201c\x2d7278\x2d4224\x2d972c\x2d05bdd7248a44-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx4jxr.mount: Deactivated successfully. Oct 8 19:29:18.766743 systemd[1]: var-lib-kubelet-pods-1c80d9b7\x2d0d35\x2d4215\x2da6dc\x2dbf743dd049e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djqj8c.mount: Deactivated successfully. Oct 8 19:29:18.766825 systemd[1]: var-lib-kubelet-pods-1c80d9b7\x2d0d35\x2d4215\x2da6dc\x2dbf743dd049e8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 8 19:29:18.766880 systemd[1]: var-lib-kubelet-pods-1c80d9b7\x2d0d35\x2d4215\x2da6dc\x2dbf743dd049e8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 8 19:29:19.330452 kubelet[2532]: I1008 19:29:19.330410 2532 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1c80d9b7-0d35-4215-a6dc-bf743dd049e8" path="/var/lib/kubelet/pods/1c80d9b7-0d35-4215-a6dc-bf743dd049e8/volumes" Oct 8 19:29:19.331052 kubelet[2532]: I1008 19:29:19.331023 2532 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5f4d201c-7278-4224-972c-05bdd7248a44" path="/var/lib/kubelet/pods/5f4d201c-7278-4224-972c-05bdd7248a44/volumes" Oct 8 19:29:19.681524 sshd[4161]: pam_unix(sshd:session): session closed for user core Oct 8 19:29:19.694472 systemd[1]: sshd@22-10.0.0.22:22-10.0.0.1:45252.service: Deactivated successfully. Oct 8 19:29:19.696072 systemd[1]: session-23.scope: Deactivated successfully. Oct 8 19:29:19.696219 systemd[1]: session-23.scope: Consumed 1.153s CPU time. Oct 8 19:29:19.697241 systemd-logind[1425]: Session 23 logged out. Waiting for processes to exit. Oct 8 19:29:19.698476 systemd[1]: Started sshd@23-10.0.0.22:22-10.0.0.1:45254.service - OpenSSH per-connection server daemon (10.0.0.1:45254). Oct 8 19:29:19.699228 systemd-logind[1425]: Removed session 23. 
Oct 8 19:29:19.738779 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 45254 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:29:19.740882 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:29:19.745087 systemd-logind[1425]: New session 24 of user core. Oct 8 19:29:19.754987 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 8 19:29:20.499143 sshd[4322]: pam_unix(sshd:session): session closed for user core Oct 8 19:29:20.508132 kubelet[2532]: I1008 19:29:20.508096 2532 topology_manager.go:215] "Topology Admit Handler" podUID="39065929-47bc-47a7-8c70-6d5b62aec86f" podNamespace="kube-system" podName="cilium-wxxdj" Oct 8 19:29:20.511583 kubelet[2532]: E1008 19:29:20.509574 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1c80d9b7-0d35-4215-a6dc-bf743dd049e8" containerName="mount-cgroup" Oct 8 19:29:20.511583 kubelet[2532]: E1008 19:29:20.509600 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1c80d9b7-0d35-4215-a6dc-bf743dd049e8" containerName="mount-bpf-fs" Oct 8 19:29:20.511583 kubelet[2532]: E1008 19:29:20.509609 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1c80d9b7-0d35-4215-a6dc-bf743dd049e8" containerName="clean-cilium-state" Oct 8 19:29:20.511583 kubelet[2532]: E1008 19:29:20.509615 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1c80d9b7-0d35-4215-a6dc-bf743dd049e8" containerName="cilium-agent" Oct 8 19:29:20.511583 kubelet[2532]: E1008 19:29:20.509622 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1c80d9b7-0d35-4215-a6dc-bf743dd049e8" containerName="apply-sysctl-overwrites" Oct 8 19:29:20.511583 kubelet[2532]: E1008 19:29:20.509629 2532 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5f4d201c-7278-4224-972c-05bdd7248a44" containerName="cilium-operator" Oct 8 19:29:20.511583 kubelet[2532]: I1008 19:29:20.509654 2532 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c80d9b7-0d35-4215-a6dc-bf743dd049e8" containerName="cilium-agent" Oct 8 19:29:20.511583 kubelet[2532]: I1008 19:29:20.509660 2532 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f4d201c-7278-4224-972c-05bdd7248a44" containerName="cilium-operator" Oct 8 19:29:20.510512 systemd[1]: sshd@23-10.0.0.22:22-10.0.0.1:45254.service: Deactivated successfully. Oct 8 19:29:20.516286 systemd[1]: session-24.scope: Deactivated successfully. Oct 8 19:29:20.524162 systemd-logind[1425]: Session 24 logged out. Waiting for processes to exit. Oct 8 19:29:20.537181 systemd[1]: Started sshd@24-10.0.0.22:22-10.0.0.1:45264.service - OpenSSH per-connection server daemon (10.0.0.1:45264). Oct 8 19:29:20.541036 systemd-logind[1425]: Removed session 24. Oct 8 19:29:20.548488 systemd[1]: Created slice kubepods-burstable-pod39065929_47bc_47a7_8c70_6d5b62aec86f.slice - libcontainer container kubepods-burstable-pod39065929_47bc_47a7_8c70_6d5b62aec86f.slice. Oct 8 19:29:20.571507 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 45264 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:29:20.572715 sshd[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:29:20.576265 systemd-logind[1425]: New session 25 of user core. Oct 8 19:29:20.584045 systemd[1]: Started session-25.scope - Session 25 of User core. 
Oct 8 19:29:20.596315 kubelet[2532]: I1008 19:29:20.596280 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39065929-47bc-47a7-8c70-6d5b62aec86f-lib-modules\") pod \"cilium-wxxdj\" (UID: \"39065929-47bc-47a7-8c70-6d5b62aec86f\") " pod="kube-system/cilium-wxxdj" Oct 8 19:29:20.596418 kubelet[2532]: I1008 19:29:20.596326 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/39065929-47bc-47a7-8c70-6d5b62aec86f-cilium-ipsec-secrets\") pod \"cilium-wxxdj\" (UID: \"39065929-47bc-47a7-8c70-6d5b62aec86f\") " pod="kube-system/cilium-wxxdj" Oct 8 19:29:20.596418 kubelet[2532]: I1008 19:29:20.596349 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkscj\" (UniqueName: \"kubernetes.io/projected/39065929-47bc-47a7-8c70-6d5b62aec86f-kube-api-access-qkscj\") pod \"cilium-wxxdj\" (UID: \"39065929-47bc-47a7-8c70-6d5b62aec86f\") " pod="kube-system/cilium-wxxdj" Oct 8 19:29:20.596418 kubelet[2532]: I1008 19:29:20.596370 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/39065929-47bc-47a7-8c70-6d5b62aec86f-cilium-cgroup\") pod \"cilium-wxxdj\" (UID: \"39065929-47bc-47a7-8c70-6d5b62aec86f\") " pod="kube-system/cilium-wxxdj" Oct 8 19:29:20.596418 kubelet[2532]: I1008 19:29:20.596390 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39065929-47bc-47a7-8c70-6d5b62aec86f-host-proc-sys-kernel\") pod \"cilium-wxxdj\" (UID: \"39065929-47bc-47a7-8c70-6d5b62aec86f\") " pod="kube-system/cilium-wxxdj" Oct 8 19:29:20.596418 kubelet[2532]: I1008 19:29:20.596408 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/39065929-47bc-47a7-8c70-6d5b62aec86f-hubble-tls\") pod \"cilium-wxxdj\" (UID: \"39065929-47bc-47a7-8c70-6d5b62aec86f\") " pod="kube-system/cilium-wxxdj" Oct 8 19:29:20.596540 kubelet[2532]: I1008 19:29:20.596427 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39065929-47bc-47a7-8c70-6d5b62aec86f-cni-path\") pod \"cilium-wxxdj\" (UID: \"39065929-47bc-47a7-8c70-6d5b62aec86f\") " pod="kube-system/cilium-wxxdj" Oct 8 19:29:20.596540 kubelet[2532]: I1008 19:29:20.596454 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39065929-47bc-47a7-8c70-6d5b62aec86f-host-proc-sys-net\") pod \"cilium-wxxdj\" (UID: \"39065929-47bc-47a7-8c70-6d5b62aec86f\") " pod="kube-system/cilium-wxxdj" Oct 8 19:29:20.596540 kubelet[2532]: I1008 19:29:20.596473 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39065929-47bc-47a7-8c70-6d5b62aec86f-bpf-maps\") pod \"cilium-wxxdj\" (UID: \"39065929-47bc-47a7-8c70-6d5b62aec86f\") " pod="kube-system/cilium-wxxdj" Oct 8 19:29:20.596540 kubelet[2532]: I1008 19:29:20.596492 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/39065929-47bc-47a7-8c70-6d5b62aec86f-cilium-config-path\") pod \"cilium-wxxdj\" (UID: \"39065929-47bc-47a7-8c70-6d5b62aec86f\") " pod="kube-system/cilium-wxxdj" Oct 8 19:29:20.596540 kubelet[2532]: I1008 19:29:20.596513 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39065929-47bc-47a7-8c70-6d5b62aec86f-clustermesh-secrets\") pod \"cilium-wxxdj\" (UID: \"39065929-47bc-47a7-8c70-6d5b62aec86f\") " pod="kube-system/cilium-wxxdj" Oct 8 19:29:20.596540 kubelet[2532]: I1008 19:29:20.596530 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39065929-47bc-47a7-8c70-6d5b62aec86f-hostproc\") pod \"cilium-wxxdj\" (UID: \"39065929-47bc-47a7-8c70-6d5b62aec86f\") " pod="kube-system/cilium-wxxdj" Oct 8 19:29:20.596660 kubelet[2532]: I1008 19:29:20.596547 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39065929-47bc-47a7-8c70-6d5b62aec86f-etc-cni-netd\") pod \"cilium-wxxdj\" (UID: \"39065929-47bc-47a7-8c70-6d5b62aec86f\") " pod="kube-system/cilium-wxxdj" Oct 8 19:29:20.596660 kubelet[2532]: I1008 19:29:20.596566 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39065929-47bc-47a7-8c70-6d5b62aec86f-xtables-lock\") pod \"cilium-wxxdj\" (UID: \"39065929-47bc-47a7-8c70-6d5b62aec86f\") " pod="kube-system/cilium-wxxdj" Oct 8 19:29:20.596660 kubelet[2532]: I1008 19:29:20.596584 2532 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39065929-47bc-47a7-8c70-6d5b62aec86f-cilium-run\") pod \"cilium-wxxdj\" (UID: \"39065929-47bc-47a7-8c70-6d5b62aec86f\") " pod="kube-system/cilium-wxxdj" Oct 8 19:29:20.637086 sshd[4335]: pam_unix(sshd:session): session closed for user core Oct 8 19:29:20.647323 systemd[1]: sshd@24-10.0.0.22:22-10.0.0.1:45264.service: Deactivated successfully. Oct 8 19:29:20.648979 systemd[1]: session-25.scope: Deactivated successfully. Oct 8 19:29:20.650524 systemd-logind[1425]: Session 25 logged out. Waiting for processes to exit. Oct 8 19:29:20.651727 systemd[1]: Started sshd@25-10.0.0.22:22-10.0.0.1:45276.service - OpenSSH per-connection server daemon (10.0.0.1:45276). Oct 8 19:29:20.652467 systemd-logind[1425]: Removed session 25. Oct 8 19:29:20.685403 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 45276 ssh2: RSA SHA256:7GlzoUcthdqM2/gWbc3rpA5Lm+7Qkd3pe7wSn/JGGIM Oct 8 19:29:20.686639 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:29:20.690388 systemd-logind[1425]: New session 26 of user core. Oct 8 19:29:20.699576 systemd[1]: Started session-26.scope - Session 26 of User core. 
Oct 8 19:29:20.851969 kubelet[2532]: E1008 19:29:20.851038 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:29:20.852650 containerd[1447]: time="2024-10-08T19:29:20.851507693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wxxdj,Uid:39065929-47bc-47a7-8c70-6d5b62aec86f,Namespace:kube-system,Attempt:0,}" Oct 8 19:29:20.870435 containerd[1447]: time="2024-10-08T19:29:20.870348515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:29:20.870435 containerd[1447]: time="2024-10-08T19:29:20.870408594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:29:20.870435 containerd[1447]: time="2024-10-08T19:29:20.870426874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:29:20.870435 containerd[1447]: time="2024-10-08T19:29:20.870440433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:29:20.888031 systemd[1]: Started cri-containerd-53164d9ceb7e57bd2acd74af05e69efbb54bb1d579d4265b628f357fca0d6895.scope - libcontainer container 53164d9ceb7e57bd2acd74af05e69efbb54bb1d579d4265b628f357fca0d6895. Oct 8 19:29:20.906414 containerd[1447]: time="2024-10-08T19:29:20.906369829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wxxdj,Uid:39065929-47bc-47a7-8c70-6d5b62aec86f,Namespace:kube-system,Attempt:0,} returns sandbox id \"53164d9ceb7e57bd2acd74af05e69efbb54bb1d579d4265b628f357fca0d6895\"" Oct 8 19:29:20.907429 kubelet[2532]: E1008 19:29:20.907407 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:29:20.911270 containerd[1447]: time="2024-10-08T19:29:20.911153183Z" level=info msg="CreateContainer within sandbox \"53164d9ceb7e57bd2acd74af05e69efbb54bb1d579d4265b628f357fca0d6895\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 8 19:29:20.922231 containerd[1447]: time="2024-10-08T19:29:20.922006149Z" level=info msg="CreateContainer within sandbox \"53164d9ceb7e57bd2acd74af05e69efbb54bb1d579d4265b628f357fca0d6895\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"36c834438f85cf9859f70453419adc4b51b353883b6777faca20cd98edd52a30\"" Oct 8 19:29:20.922594 containerd[1447]: time="2024-10-08T19:29:20.922572218Z" level=info msg="StartContainer for \"36c834438f85cf9859f70453419adc4b51b353883b6777faca20cd98edd52a30\"" Oct 8 19:29:20.954019 systemd[1]: Started cri-containerd-36c834438f85cf9859f70453419adc4b51b353883b6777faca20cd98edd52a30.scope - libcontainer container 36c834438f85cf9859f70453419adc4b51b353883b6777faca20cd98edd52a30. Oct 8 19:29:20.977101 containerd[1447]: time="2024-10-08T19:29:20.977059961Z" level=info msg="StartContainer for \"36c834438f85cf9859f70453419adc4b51b353883b6777faca20cd98edd52a30\" returns successfully" Oct 8 19:29:20.990705 systemd[1]: cri-containerd-36c834438f85cf9859f70453419adc4b51b353883b6777faca20cd98edd52a30.scope: Deactivated successfully. 
Oct 8 19:29:21.025756 containerd[1447]: time="2024-10-08T19:29:21.025538937Z" level=info msg="shim disconnected" id=36c834438f85cf9859f70453419adc4b51b353883b6777faca20cd98edd52a30 namespace=k8s.io Oct 8 19:29:21.025756 containerd[1447]: time="2024-10-08T19:29:21.025639535Z" level=warning msg="cleaning up after shim disconnected" id=36c834438f85cf9859f70453419adc4b51b353883b6777faca20cd98edd52a30 namespace=k8s.io Oct 8 19:29:21.025756 containerd[1447]: time="2024-10-08T19:29:21.025649095Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:29:21.390239 kubelet[2532]: E1008 19:29:21.390210 2532 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 8 19:29:21.526522 kubelet[2532]: E1008 19:29:21.526408 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:29:21.529568 containerd[1447]: time="2024-10-08T19:29:21.529451133Z" level=info msg="CreateContainer within sandbox \"53164d9ceb7e57bd2acd74af05e69efbb54bb1d579d4265b628f357fca0d6895\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 8 19:29:21.550042 containerd[1447]: time="2024-10-08T19:29:21.549986242Z" level=info msg="CreateContainer within sandbox \"53164d9ceb7e57bd2acd74af05e69efbb54bb1d579d4265b628f357fca0d6895\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d5378da113a207c3f0736910847ee688dc5615a17f1f72b2ff181dfb79c76ca2\"" Oct 8 19:29:21.550576 containerd[1447]: time="2024-10-08T19:29:21.550541193Z" level=info msg="StartContainer for \"d5378da113a207c3f0736910847ee688dc5615a17f1f72b2ff181dfb79c76ca2\"" Oct 8 19:29:21.575967 systemd[1]: Started cri-containerd-d5378da113a207c3f0736910847ee688dc5615a17f1f72b2ff181dfb79c76ca2.scope - libcontainer container d5378da113a207c3f0736910847ee688dc5615a17f1f72b2ff181dfb79c76ca2. Oct 8 19:29:21.596821 containerd[1447]: time="2024-10-08T19:29:21.596243777Z" level=info msg="StartContainer for \"d5378da113a207c3f0736910847ee688dc5615a17f1f72b2ff181dfb79c76ca2\" returns successfully" Oct 8 19:29:21.606391 systemd[1]: cri-containerd-d5378da113a207c3f0736910847ee688dc5615a17f1f72b2ff181dfb79c76ca2.scope: Deactivated successfully. 
Oct 8 19:29:21.628043 containerd[1447]: time="2024-10-08T19:29:21.627988945Z" level=info msg="shim disconnected" id=d5378da113a207c3f0736910847ee688dc5615a17f1f72b2ff181dfb79c76ca2 namespace=k8s.io Oct 8 19:29:21.628405 containerd[1447]: time="2024-10-08T19:29:21.628238381Z" level=warning msg="cleaning up after shim disconnected" id=d5378da113a207c3f0736910847ee688dc5615a17f1f72b2ff181dfb79c76ca2 namespace=k8s.io Oct 8 19:29:21.628405 containerd[1447]: time="2024-10-08T19:29:21.628254621Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:29:22.328494 kubelet[2532]: E1008 19:29:22.328456 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:29:22.529508 kubelet[2532]: E1008 19:29:22.529448 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:29:22.532054 containerd[1447]: time="2024-10-08T19:29:22.531849828Z" level=info msg="CreateContainer within sandbox \"53164d9ceb7e57bd2acd74af05e69efbb54bb1d579d4265b628f357fca0d6895\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 8 19:29:22.545829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2945199133.mount: Deactivated successfully. Oct 8 19:29:22.548448 containerd[1447]: time="2024-10-08T19:29:22.548395070Z" level=info msg="CreateContainer within sandbox \"53164d9ceb7e57bd2acd74af05e69efbb54bb1d579d4265b628f357fca0d6895\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a0ef3bd680f675fac8687582307370681a1ec2dc5e4362c7f1f04725aeef468a\"" Oct 8 19:29:22.548924 containerd[1447]: time="2024-10-08T19:29:22.548873503Z" level=info msg="StartContainer for \"a0ef3bd680f675fac8687582307370681a1ec2dc5e4362c7f1f04725aeef468a\"" Oct 8 19:29:22.579092 systemd[1]: Started cri-containerd-a0ef3bd680f675fac8687582307370681a1ec2dc5e4362c7f1f04725aeef468a.scope - libcontainer container a0ef3bd680f675fac8687582307370681a1ec2dc5e4362c7f1f04725aeef468a. Oct 8 19:29:22.603474 containerd[1447]: time="2024-10-08T19:29:22.603426040Z" level=info msg="StartContainer for \"a0ef3bd680f675fac8687582307370681a1ec2dc5e4362c7f1f04725aeef468a\" returns successfully" Oct 8 19:29:22.604606 systemd[1]: cri-containerd-a0ef3bd680f675fac8687582307370681a1ec2dc5e4362c7f1f04725aeef468a.scope: Deactivated successfully. Oct 8 19:29:22.629031 containerd[1447]: time="2024-10-08T19:29:22.628955673Z" level=info msg="shim disconnected" id=a0ef3bd680f675fac8687582307370681a1ec2dc5e4362c7f1f04725aeef468a namespace=k8s.io Oct 8 19:29:22.629031 containerd[1447]: time="2024-10-08T19:29:22.629015672Z" level=warning msg="cleaning up after shim disconnected" id=a0ef3bd680f675fac8687582307370681a1ec2dc5e4362c7f1f04725aeef468a namespace=k8s.io Oct 8 19:29:22.629031 containerd[1447]: time="2024-10-08T19:29:22.629024352Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:29:22.701774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0ef3bd680f675fac8687582307370681a1ec2dc5e4362c7f1f04725aeef468a-rootfs.mount: Deactivated successfully. 
Oct 8 19:29:22.968179 kubelet[2532]: I1008 19:29:22.967304 2532 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-08T19:29:22Z","lastTransitionTime":"2024-10-08T19:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 8 19:29:23.545364 kubelet[2532]: E1008 19:29:23.544903 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:29:23.546884 containerd[1447]: time="2024-10-08T19:29:23.546844379Z" level=info msg="CreateContainer within sandbox \"53164d9ceb7e57bd2acd74af05e69efbb54bb1d579d4265b628f357fca0d6895\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 8 19:29:23.561671 containerd[1447]: time="2024-10-08T19:29:23.561539913Z" level=info msg="CreateContainer within sandbox \"53164d9ceb7e57bd2acd74af05e69efbb54bb1d579d4265b628f357fca0d6895\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8049031f01b2f85359e814d112daa1cf7fa2be06bdd77975bf1c1d7e26ebdeb8\"" Oct 8 19:29:23.562396 containerd[1447]: time="2024-10-08T19:29:23.562308783Z" level=info msg="StartContainer for \"8049031f01b2f85359e814d112daa1cf7fa2be06bdd77975bf1c1d7e26ebdeb8\"" Oct 8 19:29:23.565005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2176707746.mount: Deactivated successfully. Oct 8 19:29:23.592983 systemd[1]: Started cri-containerd-8049031f01b2f85359e814d112daa1cf7fa2be06bdd77975bf1c1d7e26ebdeb8.scope - libcontainer container 8049031f01b2f85359e814d112daa1cf7fa2be06bdd77975bf1c1d7e26ebdeb8. Oct 8 19:29:23.612118 systemd[1]: cri-containerd-8049031f01b2f85359e814d112daa1cf7fa2be06bdd77975bf1c1d7e26ebdeb8.scope: Deactivated successfully. Oct 8 19:29:23.615857 containerd[1447]: time="2024-10-08T19:29:23.614577041Z" level=info msg="StartContainer for \"8049031f01b2f85359e814d112daa1cf7fa2be06bdd77975bf1c1d7e26ebdeb8\" returns successfully" Oct 8 19:29:23.635082 containerd[1447]: time="2024-10-08T19:29:23.635026342Z" level=info msg="shim disconnected" id=8049031f01b2f85359e814d112daa1cf7fa2be06bdd77975bf1c1d7e26ebdeb8 namespace=k8s.io Oct 8 19:29:23.635082 containerd[1447]: time="2024-10-08T19:29:23.635076862Z" level=warning msg="cleaning up after shim disconnected" id=8049031f01b2f85359e814d112daa1cf7fa2be06bdd77975bf1c1d7e26ebdeb8 namespace=k8s.io Oct 8 19:29:23.635082 containerd[1447]: time="2024-10-08T19:29:23.635085182Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:29:23.645713 containerd[1447]: time="2024-10-08T19:29:23.645661208Z" level=warning msg="cleanup warnings time=\"2024-10-08T19:29:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 8 19:29:23.701870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8049031f01b2f85359e814d112daa1cf7fa2be06bdd77975bf1c1d7e26ebdeb8-rootfs.mount: Deactivated successfully. 
Oct 8 19:29:24.549298 kubelet[2532]: E1008 19:29:24.549115 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:29:24.552179 containerd[1447]: time="2024-10-08T19:29:24.552132202Z" level=info msg="CreateContainer within sandbox \"53164d9ceb7e57bd2acd74af05e69efbb54bb1d579d4265b628f357fca0d6895\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 8 19:29:24.565312 containerd[1447]: time="2024-10-08T19:29:24.565254978Z" level=info msg="CreateContainer within sandbox \"53164d9ceb7e57bd2acd74af05e69efbb54bb1d579d4265b628f357fca0d6895\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"156d51e7b3f4eeb8d665b2b003daabe1aee1826abd3847d2609c46ebecbb09bd\"" Oct 8 19:29:24.565809 containerd[1447]: time="2024-10-08T19:29:24.565710253Z" level=info msg="StartContainer for \"156d51e7b3f4eeb8d665b2b003daabe1aee1826abd3847d2609c46ebecbb09bd\"" Oct 8 19:29:24.566390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3680336030.mount: Deactivated successfully. Oct 8 19:29:24.596984 systemd[1]: Started cri-containerd-156d51e7b3f4eeb8d665b2b003daabe1aee1826abd3847d2609c46ebecbb09bd.scope - libcontainer container 156d51e7b3f4eeb8d665b2b003daabe1aee1826abd3847d2609c46ebecbb09bd. Oct 8 19:29:24.622352 containerd[1447]: time="2024-10-08T19:29:24.622259110Z" level=info msg="StartContainer for \"156d51e7b3f4eeb8d665b2b003daabe1aee1826abd3847d2609c46ebecbb09bd\" returns successfully" Oct 8 19:29:24.701993 systemd[1]: run-containerd-runc-k8s.io-156d51e7b3f4eeb8d665b2b003daabe1aee1826abd3847d2609c46ebecbb09bd-runc.bkoql6.mount: Deactivated successfully. Oct 8 19:29:24.884819 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Oct 8 19:29:25.553263 kubelet[2532]: E1008 19:29:25.553222 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:29:26.854128 kubelet[2532]: E1008 19:29:26.853290 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:29:27.682365 systemd-networkd[1365]: lxc_health: Link UP Oct 8 19:29:27.690277 systemd-networkd[1365]: lxc_health: Gained carrier Oct 8 19:29:28.854135 kubelet[2532]: E1008 19:29:28.854091 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:29:28.881142 kubelet[2532]: I1008 19:29:28.880820 2532 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-wxxdj" podStartSLOduration=8.880755665 podStartE2EDuration="8.880755665s" podCreationTimestamp="2024-10-08 19:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:29:25.566584578 +0000 UTC m=+84.330817379" watchObservedRunningTime="2024-10-08 19:29:28.880755665 +0000 UTC m=+87.644988466" Oct 8 19:29:29.167871 systemd[1]: run-containerd-runc-k8s.io-156d51e7b3f4eeb8d665b2b003daabe1aee1826abd3847d2609c46ebecbb09bd-runc.rbp28f.mount: Deactivated successfully. 
Oct 8 19:29:29.328762 systemd-networkd[1365]: lxc_health: Gained IPv6LL Oct 8 19:29:29.566912 kubelet[2532]: E1008 19:29:29.566781 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:29:30.568321 kubelet[2532]: E1008 19:29:30.568151 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:29:32.328284 kubelet[2532]: E1008 19:29:32.328246 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 8 19:29:33.469690 sshd[4343]: pam_unix(sshd:session): session closed for user core Oct 8 19:29:33.473127 systemd[1]: sshd@25-10.0.0.22:22-10.0.0.1:45276.service: Deactivated successfully. Oct 8 19:29:33.475037 systemd[1]: session-26.scope: Deactivated successfully. Oct 8 19:29:33.475654 systemd-logind[1425]: Session 26 logged out. Waiting for processes to exit. Oct 8 19:29:33.476580 systemd-logind[1425]: Removed session 26.