Oct 9 00:42:09.885797 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 9 00:42:09.885817 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Oct 8 23:34:40 -00 2024
Oct 9 00:42:09.885826 kernel: KASLR enabled
Oct 9 00:42:09.885832 kernel: efi: EFI v2.7 by EDK II
Oct 9 00:42:09.885838 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Oct 9 00:42:09.885843 kernel: random: crng init done
Oct 9 00:42:09.885850 kernel: secureboot: Secure boot disabled
Oct 9 00:42:09.885856 kernel: ACPI: Early table checksum verification disabled
Oct 9 00:42:09.885862 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Oct 9 00:42:09.885869 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 9 00:42:09.885875 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:42:09.885888 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:42:09.885894 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:42:09.885900 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:42:09.885908 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:42:09.885915 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:42:09.885922 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:42:09.885928 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:42:09.885934 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:42:09.885940 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 9 00:42:09.885946 kernel: NUMA: Failed to initialise from firmware
Oct 9 00:42:09.885953 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 9 00:42:09.885959 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Oct 9 00:42:09.885965 kernel: Zone ranges:
Oct 9 00:42:09.885971 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 9 00:42:09.885978 kernel: DMA32 empty
Oct 9 00:42:09.885984 kernel: Normal empty
Oct 9 00:42:09.885990 kernel: Movable zone start for each node
Oct 9 00:42:09.885996 kernel: Early memory node ranges
Oct 9 00:42:09.886002 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Oct 9 00:42:09.886009 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Oct 9 00:42:09.886015 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Oct 9 00:42:09.886022 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 9 00:42:09.886028 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 9 00:42:09.886035 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 9 00:42:09.886041 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 9 00:42:09.886047 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 9 00:42:09.886068 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 9 00:42:09.886074 kernel: psci: probing for conduit method from ACPI.
Oct 9 00:42:09.886080 kernel: psci: PSCIv1.1 detected in firmware.
Oct 9 00:42:09.886090 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 9 00:42:09.886097 kernel: psci: Trusted OS migration not required
Oct 9 00:42:09.886103 kernel: psci: SMC Calling Convention v1.1
Oct 9 00:42:09.886111 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 9 00:42:09.886118 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 9 00:42:09.886125 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 9 00:42:09.886132 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 9 00:42:09.886138 kernel: Detected PIPT I-cache on CPU0
Oct 9 00:42:09.886145 kernel: CPU features: detected: GIC system register CPU interface
Oct 9 00:42:09.886152 kernel: CPU features: detected: Hardware dirty bit management
Oct 9 00:42:09.886158 kernel: CPU features: detected: Spectre-v4
Oct 9 00:42:09.886165 kernel: CPU features: detected: Spectre-BHB
Oct 9 00:42:09.886172 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 9 00:42:09.886180 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 9 00:42:09.886187 kernel: CPU features: detected: ARM erratum 1418040
Oct 9 00:42:09.886193 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 9 00:42:09.886200 kernel: alternatives: applying boot alternatives
Oct 9 00:42:09.886207 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d2d67b5440410ae2d0aa86eba97891969be0a7a421fa55f13442706ef7ed2a5e
Oct 9 00:42:09.886215 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 00:42:09.886221 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 9 00:42:09.886228 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 00:42:09.886235 kernel: Fallback order for Node 0: 0
Oct 9 00:42:09.886242 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 9 00:42:09.886248 kernel: Policy zone: DMA
Oct 9 00:42:09.886256 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 00:42:09.886262 kernel: software IO TLB: area num 4.
Oct 9 00:42:09.886269 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Oct 9 00:42:09.886276 kernel: Memory: 2386400K/2572288K available (10240K kernel code, 2184K rwdata, 8092K rodata, 39552K init, 897K bss, 185888K reserved, 0K cma-reserved)
Oct 9 00:42:09.886283 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 9 00:42:09.886289 kernel: trace event string verifier disabled
Oct 9 00:42:09.886296 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 00:42:09.886303 kernel: rcu: RCU event tracing is enabled.
Oct 9 00:42:09.886309 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 9 00:42:09.886316 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 00:42:09.886323 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 00:42:09.886329 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 00:42:09.886337 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 9 00:42:09.886344 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 9 00:42:09.886350 kernel: GICv3: 256 SPIs implemented
Oct 9 00:42:09.886357 kernel: GICv3: 0 Extended SPIs implemented
Oct 9 00:42:09.886387 kernel: Root IRQ handler: gic_handle_irq
Oct 9 00:42:09.886395 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 9 00:42:09.886401 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 9 00:42:09.886408 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 9 00:42:09.886415 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Oct 9 00:42:09.886421 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Oct 9 00:42:09.886428 kernel: GICv3: using LPI property table @0x00000000400f0000
Oct 9 00:42:09.886436 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Oct 9 00:42:09.886443 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 00:42:09.886450 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 9 00:42:09.886456 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 9 00:42:09.886463 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 9 00:42:09.886470 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 9 00:42:09.886476 kernel: arm-pv: using stolen time PV
Oct 9 00:42:09.886483 kernel: Console: colour dummy device 80x25
Oct 9 00:42:09.886490 kernel: ACPI: Core revision 20230628
Oct 9 00:42:09.886497 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 9 00:42:09.886504 kernel: pid_max: default: 32768 minimum: 301
Oct 9 00:42:09.886512 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 9 00:42:09.886519 kernel: landlock: Up and running.
Oct 9 00:42:09.886525 kernel: SELinux: Initializing.
Oct 9 00:42:09.886532 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 00:42:09.886539 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 9 00:42:09.886546 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 00:42:09.886553 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 00:42:09.886559 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 00:42:09.886566 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 00:42:09.886574 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 9 00:42:09.886581 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 9 00:42:09.886587 kernel: Remapping and enabling EFI services.
Oct 9 00:42:09.886594 kernel: smp: Bringing up secondary CPUs ...
Oct 9 00:42:09.886601 kernel: Detected PIPT I-cache on CPU1
Oct 9 00:42:09.886607 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 9 00:42:09.886614 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Oct 9 00:42:09.886621 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 9 00:42:09.886627 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 9 00:42:09.886635 kernel: Detected PIPT I-cache on CPU2
Oct 9 00:42:09.886642 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 9 00:42:09.886654 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Oct 9 00:42:09.886662 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 9 00:42:09.886669 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 9 00:42:09.886676 kernel: Detected PIPT I-cache on CPU3
Oct 9 00:42:09.886683 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 9 00:42:09.886690 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Oct 9 00:42:09.886697 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 9 00:42:09.886705 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 9 00:42:09.886712 kernel: smp: Brought up 1 node, 4 CPUs
Oct 9 00:42:09.886719 kernel: SMP: Total of 4 processors activated.
Oct 9 00:42:09.886726 kernel: CPU features: detected: 32-bit EL0 Support
Oct 9 00:42:09.886734 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 9 00:42:09.886741 kernel: CPU features: detected: Common not Private translations
Oct 9 00:42:09.886748 kernel: CPU features: detected: CRC32 instructions
Oct 9 00:42:09.886755 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 9 00:42:09.886763 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 9 00:42:09.886770 kernel: CPU features: detected: LSE atomic instructions
Oct 9 00:42:09.886777 kernel: CPU features: detected: Privileged Access Never
Oct 9 00:42:09.886784 kernel: CPU features: detected: RAS Extension Support
Oct 9 00:42:09.886791 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 9 00:42:09.886798 kernel: CPU: All CPU(s) started at EL1
Oct 9 00:42:09.886805 kernel: alternatives: applying system-wide alternatives
Oct 9 00:42:09.886816 kernel: devtmpfs: initialized
Oct 9 00:42:09.886825 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 00:42:09.886833 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 9 00:42:09.886840 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 00:42:09.886847 kernel: SMBIOS 3.0.0 present.
Oct 9 00:42:09.886854 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Oct 9 00:42:09.886862 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 00:42:09.886869 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 9 00:42:09.886876 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 9 00:42:09.886888 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 9 00:42:09.886895 kernel: audit: initializing netlink subsys (disabled)
Oct 9 00:42:09.886904 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Oct 9 00:42:09.886911 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 00:42:09.886919 kernel: cpuidle: using governor menu
Oct 9 00:42:09.886926 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 9 00:42:09.886933 kernel: ASID allocator initialised with 32768 entries
Oct 9 00:42:09.886940 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 00:42:09.886947 kernel: Serial: AMBA PL011 UART driver
Oct 9 00:42:09.886954 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 9 00:42:09.886961 kernel: Modules: 0 pages in range for non-PLT usage
Oct 9 00:42:09.886969 kernel: Modules: 508992 pages in range for PLT usage
Oct 9 00:42:09.886977 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 9 00:42:09.886984 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 9 00:42:09.886991 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 9 00:42:09.886998 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 9 00:42:09.887005 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 00:42:09.887012 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 00:42:09.887019 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 9 00:42:09.887026 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 9 00:42:09.887035 kernel: ACPI: Added _OSI(Module Device)
Oct 9 00:42:09.887042 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 00:42:09.887049 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 00:42:09.887056 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 00:42:09.887063 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 00:42:09.887070 kernel: ACPI: Interpreter enabled
Oct 9 00:42:09.887077 kernel: ACPI: Using GIC for interrupt routing
Oct 9 00:42:09.887084 kernel: ACPI: MCFG table detected, 1 entries
Oct 9 00:42:09.887092 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 9 00:42:09.887099 kernel: printk: console [ttyAMA0] enabled
Oct 9 00:42:09.887107 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 00:42:09.887232 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 00:42:09.887305 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 9 00:42:09.887432 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 9 00:42:09.887502 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 9 00:42:09.887566 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 9 00:42:09.887576 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 9 00:42:09.887586 kernel: PCI host bridge to bus 0000:00
Oct 9 00:42:09.887656 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 9 00:42:09.887714 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 9 00:42:09.887770 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 9 00:42:09.887826 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 00:42:09.887913 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 9 00:42:09.887993 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 9 00:42:09.888059 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 9 00:42:09.888125 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 9 00:42:09.888189 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 9 00:42:09.888253 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 9 00:42:09.888319 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 9 00:42:09.888401 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 9 00:42:09.888472 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 9 00:42:09.888535 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 9 00:42:09.888600 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 9 00:42:09.888612 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 9 00:42:09.888620 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 9 00:42:09.888627 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 9 00:42:09.888635 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 9 00:42:09.888643 kernel: iommu: Default domain type: Translated
Oct 9 00:42:09.888656 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 9 00:42:09.888666 kernel: efivars: Registered efivars operations
Oct 9 00:42:09.888673 kernel: vgaarb: loaded
Oct 9 00:42:09.888680 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 9 00:42:09.888687 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 00:42:09.888694 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 00:42:09.888701 kernel: pnp: PnP ACPI init
Oct 9 00:42:09.888800 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 9 00:42:09.888813 kernel: pnp: PnP ACPI: found 1 devices
Oct 9 00:42:09.888820 kernel: NET: Registered PF_INET protocol family
Oct 9 00:42:09.888828 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 9 00:42:09.888835 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 9 00:42:09.888843 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 00:42:09.888850 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 00:42:09.888857 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 9 00:42:09.888865 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 9 00:42:09.888872 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 00:42:09.888886 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 9 00:42:09.888894 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 00:42:09.888901 kernel: PCI: CLS 0 bytes, default 64
Oct 9 00:42:09.888909 kernel: kvm [1]: HYP mode not available
Oct 9 00:42:09.888916 kernel: Initialise system trusted keyrings
Oct 9 00:42:09.888923 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 9 00:42:09.888930 kernel: Key type asymmetric registered
Oct 9 00:42:09.888938 kernel: Asymmetric key parser 'x509' registered
Oct 9 00:42:09.888945 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 9 00:42:09.888954 kernel: io scheduler mq-deadline registered
Oct 9 00:42:09.888961 kernel: io scheduler kyber registered
Oct 9 00:42:09.888968 kernel: io scheduler bfq registered
Oct 9 00:42:09.888976 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 9 00:42:09.888983 kernel: ACPI: button: Power Button [PWRB]
Oct 9 00:42:09.888991 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 9 00:42:09.889064 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 9 00:42:09.889074 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 00:42:09.889082 kernel: thunder_xcv, ver 1.0
Oct 9 00:42:09.889089 kernel: thunder_bgx, ver 1.0
Oct 9 00:42:09.889098 kernel: nicpf, ver 1.0
Oct 9 00:42:09.889105 kernel: nicvf, ver 1.0
Oct 9 00:42:09.889179 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 9 00:42:09.889242 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-09T00:42:09 UTC (1728434529)
Oct 9 00:42:09.889252 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 9 00:42:09.889259 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 9 00:42:09.889267 kernel: watchdog: Delayed init of the lockup detector failed: -19
Oct 9 00:42:09.889276 kernel: watchdog: Hard watchdog permanently disabled
Oct 9 00:42:09.889283 kernel: NET: Registered PF_INET6 protocol family
Oct 9 00:42:09.889291 kernel: Segment Routing with IPv6
Oct 9 00:42:09.889299 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 00:42:09.889306 kernel: NET: Registered PF_PACKET protocol family
Oct 9 00:42:09.889313 kernel: Key type dns_resolver registered
Oct 9 00:42:09.889320 kernel: registered taskstats version 1
Oct 9 00:42:09.889328 kernel: Loading compiled-in X.509 certificates
Oct 9 00:42:09.889335 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 80611b0a9480eaf6d787b908c6349fdb5d07fa81'
Oct 9 00:42:09.889342 kernel: Key type .fscrypt registered
Oct 9 00:42:09.889351 kernel: Key type fscrypt-provisioning registered
Oct 9 00:42:09.889358 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 00:42:09.889381 kernel: ima: Allocated hash algorithm: sha1
Oct 9 00:42:09.889388 kernel: ima: No architecture policies found
Oct 9 00:42:09.889396 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 9 00:42:09.889403 kernel: clk: Disabling unused clocks
Oct 9 00:42:09.889410 kernel: Freeing unused kernel memory: 39552K
Oct 9 00:42:09.889417 kernel: Run /init as init process
Oct 9 00:42:09.889426 kernel: with arguments:
Oct 9 00:42:09.889433 kernel: /init
Oct 9 00:42:09.889440 kernel: with environment:
Oct 9 00:42:09.889447 kernel: HOME=/
Oct 9 00:42:09.889454 kernel: TERM=linux
Oct 9 00:42:09.889461 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 00:42:09.889470 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 00:42:09.889480 systemd[1]: Detected virtualization kvm.
Oct 9 00:42:09.889489 systemd[1]: Detected architecture arm64.
Oct 9 00:42:09.889497 systemd[1]: Running in initrd.
Oct 9 00:42:09.889504 systemd[1]: No hostname configured, using default hostname.
Oct 9 00:42:09.889512 systemd[1]: Hostname set to .
Oct 9 00:42:09.889519 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 00:42:09.889527 systemd[1]: Queued start job for default target initrd.target.
Oct 9 00:42:09.889535 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 00:42:09.889543 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 00:42:09.889552 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 00:42:09.889560 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 00:42:09.889568 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 00:42:09.889576 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 00:42:09.889585 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 00:42:09.889593 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 00:42:09.889601 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 00:42:09.889610 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 00:42:09.889618 systemd[1]: Reached target paths.target - Path Units.
Oct 9 00:42:09.889625 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 00:42:09.889633 systemd[1]: Reached target swap.target - Swaps.
Oct 9 00:42:09.889641 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 00:42:09.889648 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 00:42:09.889656 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 00:42:09.889664 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 00:42:09.889672 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 00:42:09.889681 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 00:42:09.889688 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 00:42:09.889696 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 00:42:09.889704 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 00:42:09.889712 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 9 00:42:09.889719 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 00:42:09.889727 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 00:42:09.889735 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 00:42:09.889744 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 00:42:09.889752 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 00:42:09.889759 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 00:42:09.889767 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 00:42:09.889775 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 00:42:09.889783 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 00:42:09.889808 systemd-journald[238]: Collecting audit messages is disabled.
Oct 9 00:42:09.889827 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 00:42:09.889835 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:42:09.889845 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 00:42:09.889854 systemd-journald[238]: Journal started
Oct 9 00:42:09.889872 systemd-journald[238]: Runtime Journal (/run/log/journal/d65c8a13fff14e45ba2ad4fa904a7831) is 5.9M, max 47.3M, 41.4M free.
Oct 9 00:42:09.875916 systemd-modules-load[239]: Inserted module 'overlay'
Oct 9 00:42:09.893382 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 00:42:09.893409 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 00:42:09.894382 kernel: Bridge firewalling registered
Oct 9 00:42:09.894673 systemd-modules-load[239]: Inserted module 'br_netfilter'
Oct 9 00:42:09.895598 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 00:42:09.908659 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 00:42:09.909998 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 00:42:09.911475 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 00:42:09.913702 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 00:42:09.921488 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 00:42:09.922611 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 00:42:09.923613 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 00:42:09.933551 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 00:42:09.935137 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 00:42:09.939482 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 9 00:42:09.949822 dracut-cmdline[278]: dracut-dracut-053
Oct 9 00:42:09.952219 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d2d67b5440410ae2d0aa86eba97891969be0a7a421fa55f13442706ef7ed2a5e
Oct 9 00:42:09.964498 systemd-resolved[276]: Positive Trust Anchors:
Oct 9 00:42:09.964573 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 00:42:09.964604 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 00:42:09.969223 systemd-resolved[276]: Defaulting to hostname 'linux'.
Oct 9 00:42:09.970122 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 00:42:09.971165 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 00:42:10.017396 kernel: SCSI subsystem initialized
Oct 9 00:42:10.021378 kernel: Loading iSCSI transport class v2.0-870.
Oct 9 00:42:10.028392 kernel: iscsi: registered transport (tcp)
Oct 9 00:42:10.041389 kernel: iscsi: registered transport (qla4xxx)
Oct 9 00:42:10.041425 kernel: QLogic iSCSI HBA Driver
Oct 9 00:42:10.080633 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 9 00:42:10.090519 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 9 00:42:10.106141 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 9 00:42:10.106185 kernel: device-mapper: uevent: version 1.0.3
Oct 9 00:42:10.107382 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 9 00:42:10.153396 kernel: raid6: neonx8 gen() 15776 MB/s
Oct 9 00:42:10.170398 kernel: raid6: neonx4 gen() 15675 MB/s
Oct 9 00:42:10.187382 kernel: raid6: neonx2 gen() 13212 MB/s
Oct 9 00:42:10.204389 kernel: raid6: neonx1 gen() 10473 MB/s
Oct 9 00:42:10.221388 kernel: raid6: int64x8 gen() 6958 MB/s
Oct 9 00:42:10.238380 kernel: raid6: int64x4 gen() 7344 MB/s
Oct 9 00:42:10.255378 kernel: raid6: int64x2 gen() 6134 MB/s
Oct 9 00:42:10.272382 kernel: raid6: int64x1 gen() 5056 MB/s
Oct 9 00:42:10.272396 kernel: raid6: using algorithm neonx8 gen() 15776 MB/s
Oct 9 00:42:10.289384 kernel: raid6: .... xor() 11945 MB/s, rmw enabled
Oct 9 00:42:10.289396 kernel: raid6: using neon recovery algorithm
Oct 9 00:42:10.294607 kernel: xor: measuring software checksum speed
Oct 9 00:42:10.294621 kernel: 8regs : 19664 MB/sec
Oct 9 00:42:10.294630 kernel: 32regs : 19664 MB/sec
Oct 9 00:42:10.295492 kernel: arm64_neon : 26945 MB/sec
Oct 9 00:42:10.295503 kernel: xor: using function: arm64_neon (26945 MB/sec)
Oct 9 00:42:10.346221 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 9 00:42:10.357331 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 00:42:10.368536 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 00:42:10.379388 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Oct 9 00:42:10.382504 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 00:42:10.388512 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 9 00:42:10.399506 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Oct 9 00:42:10.424642 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 00:42:10.441627 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 00:42:10.481114 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 00:42:10.487566 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 9 00:42:10.499637 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 00:42:10.500631 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 00:42:10.501879 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 00:42:10.503399 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 00:42:10.511520 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 00:42:10.520334 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 00:42:10.528401 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 9 00:42:10.530499 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 9 00:42:10.533383 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 00:42:10.536511 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 00:42:10.536538 kernel: GPT:9289727 != 19775487 Oct 9 00:42:10.536548 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 00:42:10.536557 kernel: GPT:9289727 != 19775487 Oct 9 00:42:10.536567 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 9 00:42:10.536577 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 00:42:10.533497 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 00:42:10.537832 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 00:42:10.538925 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 9 00:42:10.539064 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:42:10.541481 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:42:10.547664 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:42:10.556626 kernel: BTRFS: device fsid c25b3a2f-539f-42a7-8842-97b35e474647 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (520) Oct 9 00:42:10.558397 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (510) Oct 9 00:42:10.559390 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:42:10.569442 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 9 00:42:10.573502 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 9 00:42:10.576987 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 9 00:42:10.577872 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 9 00:42:10.583241 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 00:42:10.594511 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 00:42:10.595960 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 00:42:10.609750 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 00:42:10.705481 disk-uuid[554]: Primary Header is updated. Oct 9 00:42:10.705481 disk-uuid[554]: Secondary Entries is updated. Oct 9 00:42:10.705481 disk-uuid[554]: Secondary Header is updated. 
Oct 9 00:42:10.708394 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 00:42:11.718388 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 00:42:11.719012 disk-uuid[563]: The operation has completed successfully. Oct 9 00:42:11.739452 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 00:42:11.739543 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 00:42:11.756520 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 9 00:42:11.759134 sh[576]: Success Oct 9 00:42:11.774710 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 9 00:42:11.801122 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 00:42:11.808616 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 00:42:11.811791 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 9 00:42:11.818597 kernel: BTRFS info (device dm-0): first mount of filesystem c25b3a2f-539f-42a7-8842-97b35e474647 Oct 9 00:42:11.818640 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 9 00:42:11.818661 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 00:42:11.819888 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 00:42:11.819901 kernel: BTRFS info (device dm-0): using free space tree Oct 9 00:42:11.823325 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 00:42:11.824390 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 00:42:11.832562 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 9 00:42:11.834538 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Oct 9 00:42:11.842531 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 00:42:11.842567 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 9 00:42:11.842583 kernel: BTRFS info (device vda6): using free space tree
Oct 9 00:42:11.845403 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 00:42:11.851138 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 9 00:42:11.852398 kernel: BTRFS info (device vda6): last unmount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 00:42:11.858019 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 9 00:42:11.862600 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 9 00:42:11.932895 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 00:42:11.948545 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 00:42:11.964864 ignition[673]: Ignition 2.19.0
Oct 9 00:42:11.964874 ignition[673]: Stage: fetch-offline
Oct 9 00:42:11.964906 ignition[673]: no configs at "/usr/lib/ignition/base.d"
Oct 9 00:42:11.964914 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 00:42:11.965067 ignition[673]: parsed url from cmdline: ""
Oct 9 00:42:11.965071 ignition[673]: no config URL provided
Oct 9 00:42:11.965075 ignition[673]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 00:42:11.965082 ignition[673]: no config at "/usr/lib/ignition/user.ign"
Oct 9 00:42:11.965110 ignition[673]: op(1): [started] loading QEMU firmware config module
Oct 9 00:42:11.970996 systemd-networkd[769]: lo: Link UP
Oct 9 00:42:11.965115 ignition[673]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 9 00:42:11.970999 systemd-networkd[769]: lo: Gained carrier
Oct 9 00:42:11.971709 systemd-networkd[769]: Enumeration completed
Oct 9 00:42:11.976075 ignition[673]: op(1): [finished] loading QEMU firmware config module
Oct 9 00:42:11.971941 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 00:42:11.972101 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 00:42:11.972104 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 00:42:11.972827 systemd-networkd[769]: eth0: Link UP
Oct 9 00:42:11.972830 systemd-networkd[769]: eth0: Gained carrier
Oct 9 00:42:11.972845 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 00:42:11.973346 systemd[1]: Reached target network.target - Network.
Oct 9 00:42:11.987417 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.37/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 00:42:12.019467 ignition[673]: parsing config with SHA512: 0902295075b8f97c5716363cd1405cf80f7e1c88fbedfaab791de74fb72a2a99ac02b209e884a58fcffddb4820e6fc991719d9f1671219e01c9417bf7cefe338
Oct 9 00:42:12.023768 unknown[673]: fetched base config from "system"
Oct 9 00:42:12.023778 unknown[673]: fetched user config from "qemu"
Oct 9 00:42:12.025784 ignition[673]: fetch-offline: fetch-offline passed
Oct 9 00:42:12.025886 ignition[673]: Ignition finished successfully
Oct 9 00:42:12.027352 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 00:42:12.028330 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 9 00:42:12.037523 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 9 00:42:12.047679 ignition[776]: Ignition 2.19.0 Oct 9 00:42:12.047689 ignition[776]: Stage: kargs Oct 9 00:42:12.047848 ignition[776]: no configs at "/usr/lib/ignition/base.d" Oct 9 00:42:12.047858 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:42:12.048819 ignition[776]: kargs: kargs passed Oct 9 00:42:12.048871 ignition[776]: Ignition finished successfully Oct 9 00:42:12.051020 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 9 00:42:12.065520 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 9 00:42:12.074710 ignition[786]: Ignition 2.19.0 Oct 9 00:42:12.074720 ignition[786]: Stage: disks Oct 9 00:42:12.074888 ignition[786]: no configs at "/usr/lib/ignition/base.d" Oct 9 00:42:12.074897 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:42:12.075846 ignition[786]: disks: disks passed Oct 9 00:42:12.075890 ignition[786]: Ignition finished successfully Oct 9 00:42:12.077662 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 9 00:42:12.078977 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 9 00:42:12.080175 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 00:42:12.081632 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 00:42:12.083051 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 00:42:12.084395 systemd[1]: Reached target basic.target - Basic System. Oct 9 00:42:12.105506 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 9 00:42:12.114536 systemd-fsck[797]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 9 00:42:12.118390 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 9 00:42:12.124506 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 9 00:42:12.169126 systemd[1]: Mounted sysroot.mount - /sysroot. 
Oct 9 00:42:12.170235 kernel: EXT4-fs (vda9): mounted filesystem 3a4adf89-ce2b-46a9-8e1a-433a27a27d16 r/w with ordered data mode. Quota mode: none. Oct 9 00:42:12.170110 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 9 00:42:12.187450 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 00:42:12.189281 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 9 00:42:12.190158 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 9 00:42:12.190195 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 9 00:42:12.190215 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 00:42:12.195586 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 9 00:42:12.199179 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806) Oct 9 00:42:12.199199 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 00:42:12.199209 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 9 00:42:12.199219 kernel: BTRFS info (device vda6): using free space tree Oct 9 00:42:12.197352 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 9 00:42:12.202404 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 00:42:12.203224 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 9 00:42:12.239379 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory Oct 9 00:42:12.243208 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory Oct 9 00:42:12.246595 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory Oct 9 00:42:12.250031 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory Oct 9 00:42:12.314899 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 9 00:42:12.326446 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 9 00:42:12.327689 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 9 00:42:12.332379 kernel: BTRFS info (device vda6): last unmount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 00:42:12.346934 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 9 00:42:12.349085 ignition[919]: INFO : Ignition 2.19.0 Oct 9 00:42:12.349085 ignition[919]: INFO : Stage: mount Oct 9 00:42:12.350665 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 00:42:12.350665 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:42:12.350665 ignition[919]: INFO : mount: mount passed Oct 9 00:42:12.350665 ignition[919]: INFO : Ignition finished successfully Oct 9 00:42:12.351403 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 9 00:42:12.359459 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 9 00:42:12.818090 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 9 00:42:12.831525 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 9 00:42:12.836396 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (933)
Oct 9 00:42:12.838566 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b
Oct 9 00:42:12.838589 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 9 00:42:12.838599 kernel: BTRFS info (device vda6): using free space tree
Oct 9 00:42:12.840525 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 00:42:12.841473 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 00:42:12.856610 ignition[950]: INFO : Ignition 2.19.0
Oct 9 00:42:12.856610 ignition[950]: INFO : Stage: files
Oct 9 00:42:12.857804 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 00:42:12.857804 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 9 00:42:12.857804 ignition[950]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 00:42:12.860455 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 00:42:12.860455 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 00:42:12.860455 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 00:42:12.860455 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 00:42:12.864221 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 00:42:12.864221 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 9 00:42:12.864221 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 9 00:42:12.864221 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 9 00:42:12.864221 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Oct 9 00:42:12.860658 unknown[950]: wrote ssh authorized keys file for user: core
Oct 9 00:42:12.905639 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 9 00:42:13.097949 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 9 00:42:13.097949 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 9 00:42:13.100767 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Oct 9 00:42:13.398476 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Oct 9 00:42:13.492503 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Oct 9 00:42:13.493756 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 00:42:13.493756 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 00:42:13.493756 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 00:42:13.493756 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 00:42:13.493756 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 00:42:13.493756 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 00:42:13.493756 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 00:42:13.493756 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 00:42:13.503038 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 00:42:13.503038 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 00:42:13.503038 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 9 00:42:13.503038 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 9 00:42:13.503038 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 9 00:42:13.503038 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Oct 9 00:42:13.702946 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Oct 9 00:42:13.891672 systemd-networkd[769]: eth0: Gained IPv6LL
Oct 9 00:42:14.055948 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 9 00:42:14.055948 ignition[950]: INFO : files: op(d): [started] processing unit "containerd.service"
Oct 9 00:42:14.058772 ignition[950]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 9 00:42:14.058772 ignition[950]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 9 00:42:14.058772 ignition[950]: INFO : files: op(d): [finished] processing unit "containerd.service"
Oct 9 00:42:14.058772 ignition[950]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Oct 9 00:42:14.058772 ignition[950]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 00:42:14.058772 ignition[950]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 00:42:14.058772 ignition[950]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Oct 9 00:42:14.058772 ignition[950]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Oct 9 00:42:14.058772 ignition[950]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 00:42:14.058772 ignition[950]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 9 00:42:14.058772 ignition[950]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Oct 9 00:42:14.058772 ignition[950]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Oct 9 00:42:14.078801 ignition[950]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 00:42:14.082256 ignition[950]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 9 00:42:14.083424 ignition[950]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 9 00:42:14.083424 ignition[950]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 00:42:14.083424 ignition[950]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 00:42:14.083424 ignition[950]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 00:42:14.083424 ignition[950]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 00:42:14.083424 ignition[950]: INFO : files: files passed
Oct 9 00:42:14.083424 ignition[950]: INFO : Ignition finished successfully
Oct 9 00:42:14.085076 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 00:42:14.093545 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 00:42:14.095609 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 00:42:14.098003 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 00:42:14.098115 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 00:42:14.102906 initrd-setup-root-after-ignition[979]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 9 00:42:14.105987 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 00:42:14.105987 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 00:42:14.108457 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 00:42:14.111392 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 00:42:14.112406 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 9 00:42:14.119496 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 9 00:42:14.136619 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 9 00:42:14.136745 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 9 00:42:14.138293 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 9 00:42:14.141101 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 9 00:42:14.142384 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 9 00:42:14.143060 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 9 00:42:14.156874 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 00:42:14.164568 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 9 00:42:14.171651 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 9 00:42:14.172542 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 00:42:14.174005 systemd[1]: Stopped target timers.target - Timer Units. Oct 9 00:42:14.175261 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 9 00:42:14.175377 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 00:42:14.177247 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 9 00:42:14.178699 systemd[1]: Stopped target basic.target - Basic System. Oct 9 00:42:14.179901 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 9 00:42:14.181143 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 00:42:14.182541 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Oct 9 00:42:14.183945 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 9 00:42:14.185345 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 00:42:14.186873 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 9 00:42:14.188241 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 9 00:42:14.189487 systemd[1]: Stopped target swap.target - Swaps. Oct 9 00:42:14.190655 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 9 00:42:14.190759 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 9 00:42:14.192444 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 9 00:42:14.193811 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 00:42:14.195289 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 9 00:42:14.198432 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 00:42:14.199328 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 9 00:42:14.199453 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 9 00:42:14.201513 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 9 00:42:14.201627 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 00:42:14.203041 systemd[1]: Stopped target paths.target - Path Units. Oct 9 00:42:14.204234 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 9 00:42:14.207417 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 00:42:14.208446 systemd[1]: Stopped target slices.target - Slice Units. Oct 9 00:42:14.210036 systemd[1]: Stopped target sockets.target - Socket Units. Oct 9 00:42:14.211127 systemd[1]: iscsid.socket: Deactivated successfully. 
Oct 9 00:42:14.211211 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 00:42:14.212349 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 9 00:42:14.212439 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 00:42:14.213595 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 9 00:42:14.213699 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 00:42:14.215045 systemd[1]: ignition-files.service: Deactivated successfully. Oct 9 00:42:14.215142 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 9 00:42:14.229603 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 9 00:42:14.231529 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 9 00:42:14.232152 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 9 00:42:14.232258 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 00:42:14.233703 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 9 00:42:14.233789 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 00:42:14.238913 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 9 00:42:14.238998 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 9 00:42:14.241358 ignition[1005]: INFO : Ignition 2.19.0 Oct 9 00:42:14.241358 ignition[1005]: INFO : Stage: umount Oct 9 00:42:14.241358 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 00:42:14.241358 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 00:42:14.244323 ignition[1005]: INFO : umount: umount passed Oct 9 00:42:14.244323 ignition[1005]: INFO : Ignition finished successfully Oct 9 00:42:14.244603 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 9 00:42:14.244689 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Oct 9 00:42:14.247757 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 9 00:42:14.248160 systemd[1]: Stopped target network.target - Network. Oct 9 00:42:14.248933 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 9 00:42:14.248989 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 9 00:42:14.250281 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 9 00:42:14.250321 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 9 00:42:14.251528 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 9 00:42:14.251564 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 9 00:42:14.252778 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 9 00:42:14.252830 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 9 00:42:14.253723 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 9 00:42:14.255906 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 9 00:42:14.264477 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 9 00:42:14.264590 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 9 00:42:14.267338 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 9 00:42:14.267424 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 00:42:14.268398 systemd-networkd[769]: eth0: DHCPv6 lease lost Oct 9 00:42:14.270211 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 9 00:42:14.270321 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 9 00:42:14.272048 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 9 00:42:14.272081 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 9 00:42:14.280521 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Oct 9 00:42:14.281227 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 9 00:42:14.281280 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 00:42:14.282691 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 00:42:14.282731 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:42:14.284066 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 9 00:42:14.284103 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 9 00:42:14.285734 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 00:42:14.293943 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 9 00:42:14.294060 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 9 00:42:14.304100 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 9 00:42:14.304251 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 00:42:14.306023 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 9 00:42:14.306059 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 9 00:42:14.307332 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 9 00:42:14.307360 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 00:42:14.308820 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 9 00:42:14.308866 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 9 00:42:14.310911 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 9 00:42:14.310954 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 9 00:42:14.313168 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 00:42:14.313211 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 9 00:42:14.331549 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 9 00:42:14.332346 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 9 00:42:14.332418 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 00:42:14.334015 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 9 00:42:14.334052 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 00:42:14.335466 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 9 00:42:14.335501 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 00:42:14.337064 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 00:42:14.337101 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:42:14.338761 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 9 00:42:14.338853 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 9 00:42:14.341335 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 9 00:42:14.341432 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 9 00:42:14.343127 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 9 00:42:14.344073 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 9 00:42:14.344130 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 9 00:42:14.346099 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 9 00:42:14.354649 systemd[1]: Switching root. Oct 9 00:42:14.381928 systemd-journald[238]: Journal stopped Oct 9 00:42:15.062240 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Oct 9 00:42:15.062306 kernel: SELinux: policy capability network_peer_controls=1 Oct 9 00:42:15.062319 kernel: SELinux: policy capability open_perms=1 Oct 9 00:42:15.062329 kernel: SELinux: policy capability extended_socket_class=1 Oct 9 00:42:15.062341 kernel: SELinux: policy capability always_check_network=0 Oct 9 00:42:15.062355 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 9 00:42:15.062417 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 9 00:42:15.062429 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 9 00:42:15.062439 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 9 00:42:15.062449 kernel: audit: type=1403 audit(1728434534.572:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 9 00:42:15.062460 systemd[1]: Successfully loaded SELinux policy in 29.959ms. Oct 9 00:42:15.062481 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.830ms. Oct 9 00:42:15.062492 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 00:42:15.062505 systemd[1]: Detected virtualization kvm. Oct 9 00:42:15.062516 systemd[1]: Detected architecture arm64. Oct 9 00:42:15.062526 systemd[1]: Detected first boot. Oct 9 00:42:15.062536 systemd[1]: Initializing machine ID from VM UUID. Oct 9 00:42:15.062546 zram_generator::config[1067]: No configuration found. Oct 9 00:42:15.062558 systemd[1]: Populated /etc with preset unit settings. Oct 9 00:42:15.062572 systemd[1]: Queued start job for default target multi-user.target. Oct 9 00:42:15.062583 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 9 00:42:15.062594 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
Oct 9 00:42:15.062609 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 9 00:42:15.062619 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 9 00:42:15.062630 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 9 00:42:15.062641 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 9 00:42:15.062652 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 9 00:42:15.062663 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 9 00:42:15.062674 systemd[1]: Created slice user.slice - User and Session Slice. Oct 9 00:42:15.062684 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 00:42:15.062696 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 00:42:15.062707 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 9 00:42:15.062721 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 9 00:42:15.062732 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 9 00:42:15.062742 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 00:42:15.062753 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 9 00:42:15.062765 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 00:42:15.062784 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 9 00:42:15.062797 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 00:42:15.062811 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Oct 9 00:42:15.062822 systemd[1]: Reached target slices.target - Slice Units. Oct 9 00:42:15.062832 systemd[1]: Reached target swap.target - Swaps. Oct 9 00:42:15.062842 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 9 00:42:15.062853 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 9 00:42:15.062864 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 00:42:15.062874 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 00:42:15.062885 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 00:42:15.062898 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 00:42:15.062908 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 00:42:15.062919 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 9 00:42:15.062929 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 9 00:42:15.062939 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 9 00:42:15.062950 systemd[1]: Mounting media.mount - External Media Directory... Oct 9 00:42:15.062960 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 9 00:42:15.062970 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 9 00:42:15.062981 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 9 00:42:15.062993 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 9 00:42:15.063005 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 00:42:15.063016 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 00:42:15.063027 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Oct 9 00:42:15.063037 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 00:42:15.063048 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 00:42:15.063058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 00:42:15.063069 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 9 00:42:15.063079 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 00:42:15.063092 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 9 00:42:15.063102 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Oct 9 00:42:15.063113 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Oct 9 00:42:15.063124 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 00:42:15.063135 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 00:42:15.063145 kernel: fuse: init (API version 7.39) Oct 9 00:42:15.063154 kernel: loop: module loaded Oct 9 00:42:15.063164 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 9 00:42:15.063176 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 9 00:42:15.063187 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 00:42:15.063219 systemd-journald[1153]: Collecting audit messages is disabled. Oct 9 00:42:15.063241 kernel: ACPI: bus type drm_connector registered Oct 9 00:42:15.063252 systemd-journald[1153]: Journal started Oct 9 00:42:15.063274 systemd-journald[1153]: Runtime Journal (/run/log/journal/d65c8a13fff14e45ba2ad4fa904a7831) is 5.9M, max 47.3M, 41.4M free. 
Oct 9 00:42:15.066849 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 00:42:15.067788 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 9 00:42:15.068700 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 9 00:42:15.069605 systemd[1]: Mounted media.mount - External Media Directory. Oct 9 00:42:15.070409 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 9 00:42:15.071288 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 9 00:42:15.072415 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 9 00:42:15.073610 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 00:42:15.074762 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 9 00:42:15.074922 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 9 00:42:15.076079 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 9 00:42:15.077273 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 00:42:15.077438 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 00:42:15.078707 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 00:42:15.078862 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 00:42:15.080053 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 00:42:15.080202 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 00:42:15.081307 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 9 00:42:15.081483 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 9 00:42:15.082483 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 00:42:15.082677 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Oct 9 00:42:15.083958 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 00:42:15.085292 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 9 00:42:15.086514 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 9 00:42:15.097354 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 9 00:42:15.112530 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 9 00:42:15.114235 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 9 00:42:15.115146 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 9 00:42:15.117532 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 9 00:42:15.120528 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 9 00:42:15.121490 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 00:42:15.124261 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 9 00:42:15.125770 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 00:42:15.127688 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 00:42:15.133160 systemd-journald[1153]: Time spent on flushing to /var/log/journal/d65c8a13fff14e45ba2ad4fa904a7831 is 16.859ms for 850 entries. Oct 9 00:42:15.133160 systemd-journald[1153]: System Journal (/var/log/journal/d65c8a13fff14e45ba2ad4fa904a7831) is 8.0M, max 195.6M, 187.6M free. Oct 9 00:42:15.163563 systemd-journald[1153]: Received client request to flush runtime journal. 
Oct 9 00:42:15.133512 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 00:42:15.136430 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 00:42:15.137619 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 9 00:42:15.140890 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 9 00:42:15.141985 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 9 00:42:15.144912 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 9 00:42:15.155523 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 9 00:42:15.156661 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:42:15.162419 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Oct 9 00:42:15.162430 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Oct 9 00:42:15.165587 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 9 00:42:15.166292 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 9 00:42:15.167964 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 00:42:15.176570 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 9 00:42:15.193967 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 9 00:42:15.205511 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 00:42:15.216024 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Oct 9 00:42:15.216044 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Oct 9 00:42:15.219712 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Oct 9 00:42:15.525554 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 9 00:42:15.538662 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 00:42:15.556429 systemd-udevd[1239]: Using default interface naming scheme 'v255'. Oct 9 00:42:15.568670 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 00:42:15.578870 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 00:42:15.598401 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1245) Oct 9 00:42:15.597612 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 9 00:42:15.604801 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Oct 9 00:42:15.619195 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1249) Oct 9 00:42:15.622404 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1249) Oct 9 00:42:15.628929 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 00:42:15.660492 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 9 00:42:15.684621 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 00:42:15.695286 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 9 00:42:15.711595 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 9 00:42:15.725479 lvm[1275]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 00:42:15.732792 systemd-networkd[1246]: lo: Link UP Oct 9 00:42:15.733075 systemd-networkd[1246]: lo: Gained carrier Oct 9 00:42:15.733973 systemd-networkd[1246]: Enumeration completed Oct 9 00:42:15.734256 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Oct 9 00:42:15.734689 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 00:42:15.734777 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 00:42:15.735352 systemd-networkd[1246]: eth0: Link UP Oct 9 00:42:15.735486 systemd-networkd[1246]: eth0: Gained carrier Oct 9 00:42:15.735506 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 00:42:15.740513 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 9 00:42:15.741590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 00:42:15.745716 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 9 00:42:15.747225 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 00:42:15.749218 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 9 00:42:15.754422 systemd-networkd[1246]: eth0: DHCPv4 address 10.0.0.37/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 00:42:15.755632 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 00:42:15.781607 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 9 00:42:15.782682 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 00:42:15.783624 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 9 00:42:15.783655 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 00:42:15.784381 systemd[1]: Reached target machines.target - Containers. 
Oct 9 00:42:15.786005 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 9 00:42:15.793545 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 9 00:42:15.795403 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 9 00:42:15.796241 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 00:42:15.797088 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 9 00:42:15.799507 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 9 00:42:15.802662 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 9 00:42:15.806133 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 9 00:42:15.816500 kernel: loop0: detected capacity change from 0 to 194512 Oct 9 00:42:15.815841 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 9 00:42:15.821294 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 9 00:42:15.821945 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 9 00:42:15.824380 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 9 00:42:15.870398 kernel: loop1: detected capacity change from 0 to 113456 Oct 9 00:42:15.924394 kernel: loop2: detected capacity change from 0 to 116808 Oct 9 00:42:15.966392 kernel: loop3: detected capacity change from 0 to 194512 Oct 9 00:42:15.973427 kernel: loop4: detected capacity change from 0 to 113456 Oct 9 00:42:15.978431 kernel: loop5: detected capacity change from 0 to 116808 Oct 9 00:42:15.983548 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. 
Oct 9 00:42:15.983923 (sd-merge)[1313]: Merged extensions into '/usr'. Oct 9 00:42:15.988262 systemd[1]: Reloading requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)... Oct 9 00:42:15.988277 systemd[1]: Reloading... Oct 9 00:42:16.026428 ldconfig[1294]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 9 00:42:16.032392 zram_generator::config[1340]: No configuration found. Oct 9 00:42:16.122973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:42:16.164064 systemd[1]: Reloading finished in 175 ms. Oct 9 00:42:16.178976 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 9 00:42:16.180160 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 9 00:42:16.197579 systemd[1]: Starting ensure-sysext.service... Oct 9 00:42:16.199157 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 00:42:16.202351 systemd[1]: Reloading requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)... Oct 9 00:42:16.202386 systemd[1]: Reloading... Oct 9 00:42:16.214095 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 9 00:42:16.214342 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 9 00:42:16.214997 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 9 00:42:16.215209 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Oct 9 00:42:16.215252 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Oct 9 00:42:16.217536 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. 
Oct 9 00:42:16.217549 systemd-tmpfiles[1383]: Skipping /boot Oct 9 00:42:16.224205 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 00:42:16.224221 systemd-tmpfiles[1383]: Skipping /boot Oct 9 00:42:16.241722 zram_generator::config[1412]: No configuration found. Oct 9 00:42:16.328069 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:42:16.369388 systemd[1]: Reloading finished in 166 ms. Oct 9 00:42:16.380980 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 00:42:16.395075 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 00:42:16.397146 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 9 00:42:16.399085 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 9 00:42:16.403536 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 00:42:16.406895 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 9 00:42:16.412840 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 00:42:16.426624 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 00:42:16.429648 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 00:42:16.433320 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 00:42:16.434820 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 00:42:16.436031 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Oct 9 00:42:16.441629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 00:42:16.441793 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 00:42:16.443117 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 00:42:16.443254 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 00:42:16.444850 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 00:42:16.445035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 00:42:16.446713 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 9 00:42:16.454768 augenrules[1494]: No rules Oct 9 00:42:16.455551 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 00:42:16.455764 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 00:42:16.465798 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 9 00:42:16.467244 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 00:42:16.468662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 00:42:16.470565 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 00:42:16.473521 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 00:42:16.474542 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 00:42:16.476608 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 9 00:42:16.477354 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Oct 9 00:42:16.478146 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 00:42:16.478312 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 00:42:16.479703 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 00:42:16.479850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 00:42:16.481203 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 00:42:16.482630 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 00:42:16.492584 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 00:42:16.493330 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 00:42:16.494664 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 00:42:16.497621 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 00:42:16.500673 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 00:42:16.505201 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 00:42:16.506117 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 00:42:16.506251 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 00:42:16.507331 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 9 00:42:16.508634 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 00:42:16.508773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 00:42:16.511726 augenrules[1516]: /sbin/augenrules: No change Oct 9 00:42:16.513599 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Oct 9 00:42:16.514315 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 00:42:16.515959 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 00:42:16.516091 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 00:42:16.517914 augenrules[1543]: No rules Oct 9 00:42:16.518208 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 00:42:16.519568 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 00:42:16.522745 systemd[1]: Finished ensure-sysext.service. Oct 9 00:42:16.523594 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 00:42:16.523810 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 00:42:16.524430 systemd-resolved[1460]: Positive Trust Anchors: Oct 9 00:42:16.524506 systemd-resolved[1460]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 00:42:16.524538 systemd-resolved[1460]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 00:42:16.529932 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 00:42:16.530014 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 00:42:16.532051 systemd-resolved[1460]: Defaulting to hostname 'linux'. 
Oct 9 00:42:16.541499 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 9 00:42:16.542405 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 00:42:16.543395 systemd[1]: Reached target network.target - Network. Oct 9 00:42:16.544042 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 00:42:16.586256 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 9 00:42:17.035583 systemd-resolved[1460]: Clock change detected. Flushing caches. Oct 9 00:42:17.035627 systemd-timesyncd[1559]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 9 00:42:17.035669 systemd-timesyncd[1559]: Initial clock synchronization to Wed 2024-10-09 00:42:17.035539 UTC. Oct 9 00:42:17.035795 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 00:42:17.036628 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 00:42:17.037506 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 00:42:17.038391 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 00:42:17.039326 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 00:42:17.039358 systemd[1]: Reached target paths.target - Path Units. Oct 9 00:42:17.040028 systemd[1]: Reached target time-set.target - System Time Set. Oct 9 00:42:17.040887 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 00:42:17.041752 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 00:42:17.042647 systemd[1]: Reached target timers.target - Timer Units. Oct 9 00:42:17.043891 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Oct 9 00:42:17.045939 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 00:42:17.047899 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 9 00:42:17.056417 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 9 00:42:17.057197 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 00:42:17.057920 systemd[1]: Reached target basic.target - Basic System. Oct 9 00:42:17.058709 systemd[1]: System is tainted: cgroupsv1 Oct 9 00:42:17.058751 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 00:42:17.058770 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 9 00:42:17.059795 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 00:42:17.061512 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 00:42:17.063083 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 00:42:17.067595 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 9 00:42:17.068364 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 00:42:17.069385 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 00:42:17.075609 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 00:42:17.077490 jq[1565]: false Oct 9 00:42:17.079584 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 9 00:42:17.083047 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 9 00:42:17.086619 systemd[1]: Starting systemd-logind.service - User Login Management... 
Oct 9 00:42:17.091456 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 9 00:42:17.094571 systemd[1]: Starting update-engine.service - Update Engine... Oct 9 00:42:17.098584 extend-filesystems[1566]: Found loop3 Oct 9 00:42:17.098584 extend-filesystems[1566]: Found loop4 Oct 9 00:42:17.098584 extend-filesystems[1566]: Found loop5 Oct 9 00:42:17.098584 extend-filesystems[1566]: Found vda Oct 9 00:42:17.098584 extend-filesystems[1566]: Found vda1 Oct 9 00:42:17.098584 extend-filesystems[1566]: Found vda2 Oct 9 00:42:17.098584 extend-filesystems[1566]: Found vda3 Oct 9 00:42:17.098584 extend-filesystems[1566]: Found usr Oct 9 00:42:17.098584 extend-filesystems[1566]: Found vda4 Oct 9 00:42:17.098584 extend-filesystems[1566]: Found vda6 Oct 9 00:42:17.098584 extend-filesystems[1566]: Found vda7 Oct 9 00:42:17.098584 extend-filesystems[1566]: Found vda9 Oct 9 00:42:17.098584 extend-filesystems[1566]: Checking size of /dev/vda9 Oct 9 00:42:17.097215 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 9 00:42:17.114316 dbus-daemon[1564]: [system] SELinux support is enabled Oct 9 00:42:17.101723 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 9 00:42:17.119718 extend-filesystems[1566]: Resized partition /dev/vda9 Oct 9 00:42:17.101963 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 9 00:42:17.121221 extend-filesystems[1595]: resize2fs 1.47.1 (20-May-2024) Oct 9 00:42:17.123904 jq[1585]: true Oct 9 00:42:17.102192 systemd[1]: motdgen.service: Deactivated successfully. Oct 9 00:42:17.102372 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 9 00:42:17.115675 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 9 00:42:17.122617 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Oct 9 00:42:17.122842 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 9 00:42:17.126301 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1258) Oct 9 00:42:17.126378 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 9 00:42:17.145449 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 9 00:42:17.146368 jq[1596]: true Oct 9 00:42:17.158255 extend-filesystems[1595]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 9 00:42:17.158255 extend-filesystems[1595]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 9 00:42:17.158255 extend-filesystems[1595]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 9 00:42:17.164163 extend-filesystems[1566]: Resized filesystem in /dev/vda9 Oct 9 00:42:17.161330 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 9 00:42:17.161965 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 9 00:42:17.163033 (ntainerd)[1605]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 9 00:42:17.167209 tar[1591]: linux-arm64/helm Oct 9 00:42:17.172643 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 9 00:42:17.172681 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 9 00:42:17.173657 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 9 00:42:17.173681 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Oct 9 00:42:17.175182 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (Power Button) Oct 9 00:42:17.176317 systemd-logind[1577]: New seat seat0. Oct 9 00:42:17.177469 systemd[1]: Started systemd-logind.service - User Login Management. Oct 9 00:42:17.192693 update_engine[1582]: I20241009 00:42:17.192515 1582 main.cc:92] Flatcar Update Engine starting Oct 9 00:42:17.194847 systemd[1]: Started update-engine.service - Update Engine. Oct 9 00:42:17.195028 update_engine[1582]: I20241009 00:42:17.194994 1582 update_check_scheduler.cc:74] Next update check in 2m20s Oct 9 00:42:17.196317 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 9 00:42:17.208664 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 9 00:42:17.217529 bash[1627]: Updated "/home/core/.ssh/authorized_keys" Oct 9 00:42:17.218354 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 9 00:42:17.219971 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 9 00:42:17.250593 locksmithd[1628]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 9 00:42:17.372960 containerd[1605]: time="2024-10-09T00:42:17.372834579Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22 Oct 9 00:42:17.399142 containerd[1605]: time="2024-10-09T00:42:17.399107379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:42:17.400693 containerd[1605]: time="2024-10-09T00:42:17.400654979Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:42:17.400885 containerd[1605]: time="2024-10-09T00:42:17.400866379Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 9 00:42:17.401459 containerd[1605]: time="2024-10-09T00:42:17.400944979Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 9 00:42:17.401459 containerd[1605]: time="2024-10-09T00:42:17.401095459Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 9 00:42:17.401459 containerd[1605]: time="2024-10-09T00:42:17.401121139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 9 00:42:17.401459 containerd[1605]: time="2024-10-09T00:42:17.401283419Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:42:17.401459 containerd[1605]: time="2024-10-09T00:42:17.401296219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:42:17.401596 containerd[1605]: time="2024-10-09T00:42:17.401506739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:42:17.401596 containerd[1605]: time="2024-10-09T00:42:17.401522619Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 9 00:42:17.401596 containerd[1605]: time="2024-10-09T00:42:17.401535819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:42:17.401596 containerd[1605]: time="2024-10-09T00:42:17.401544219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 9 00:42:17.401669 containerd[1605]: time="2024-10-09T00:42:17.401613459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:42:17.401833 containerd[1605]: time="2024-10-09T00:42:17.401793819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 9 00:42:17.401962 containerd[1605]: time="2024-10-09T00:42:17.401938219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 00:42:17.401962 containerd[1605]: time="2024-10-09T00:42:17.401958299Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 9 00:42:17.402083 containerd[1605]: time="2024-10-09T00:42:17.402060059Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 9 00:42:17.402126 containerd[1605]: time="2024-10-09T00:42:17.402112739Z" level=info msg="metadata content store policy set" policy=shared Oct 9 00:42:17.405484 containerd[1605]: time="2024-10-09T00:42:17.405457059Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 9 00:42:17.405624 containerd[1605]: time="2024-10-09T00:42:17.405598219Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Oct 9 00:42:17.405649 containerd[1605]: time="2024-10-09T00:42:17.405630779Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 9 00:42:17.405649 containerd[1605]: time="2024-10-09T00:42:17.405645779Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 9 00:42:17.405682 containerd[1605]: time="2024-10-09T00:42:17.405659459Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 9 00:42:17.405799 containerd[1605]: time="2024-10-09T00:42:17.405782019Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 9 00:42:17.406088 containerd[1605]: time="2024-10-09T00:42:17.406069819Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 9 00:42:17.406190 containerd[1605]: time="2024-10-09T00:42:17.406174499Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 9 00:42:17.406211 containerd[1605]: time="2024-10-09T00:42:17.406194739Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 9 00:42:17.406229 containerd[1605]: time="2024-10-09T00:42:17.406209939Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 9 00:42:17.406229 containerd[1605]: time="2024-10-09T00:42:17.406222899Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 9 00:42:17.406262 containerd[1605]: time="2024-10-09T00:42:17.406233939Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Oct 9 00:42:17.406262 containerd[1605]: time="2024-10-09T00:42:17.406245219Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 9 00:42:17.406262 containerd[1605]: time="2024-10-09T00:42:17.406257139Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 9 00:42:17.406315 containerd[1605]: time="2024-10-09T00:42:17.406269619Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 9 00:42:17.406315 containerd[1605]: time="2024-10-09T00:42:17.406285099Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 9 00:42:17.406315 containerd[1605]: time="2024-10-09T00:42:17.406296419Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 9 00:42:17.406315 containerd[1605]: time="2024-10-09T00:42:17.406306499Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 9 00:42:17.406376 containerd[1605]: time="2024-10-09T00:42:17.406324859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 9 00:42:17.406376 containerd[1605]: time="2024-10-09T00:42:17.406337819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 9 00:42:17.406376 containerd[1605]: time="2024-10-09T00:42:17.406349259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 9 00:42:17.406376 containerd[1605]: time="2024-10-09T00:42:17.406360179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Oct 9 00:42:17.406376 containerd[1605]: time="2024-10-09T00:42:17.406370779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 9 00:42:17.406484 containerd[1605]: time="2024-10-09T00:42:17.406382779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 9 00:42:17.406484 containerd[1605]: time="2024-10-09T00:42:17.406393859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 9 00:42:17.406484 containerd[1605]: time="2024-10-09T00:42:17.406405979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 9 00:42:17.406484 containerd[1605]: time="2024-10-09T00:42:17.406417539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 9 00:42:17.406484 containerd[1605]: time="2024-10-09T00:42:17.406446579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 9 00:42:17.406484 containerd[1605]: time="2024-10-09T00:42:17.406459779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 9 00:42:17.406484 containerd[1605]: time="2024-10-09T00:42:17.406470899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 9 00:42:17.406484 containerd[1605]: time="2024-10-09T00:42:17.406481659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 9 00:42:17.406614 containerd[1605]: time="2024-10-09T00:42:17.406495899Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 9 00:42:17.406614 containerd[1605]: time="2024-10-09T00:42:17.406514259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Oct 9 00:42:17.406614 containerd[1605]: time="2024-10-09T00:42:17.406526379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 9 00:42:17.406614 containerd[1605]: time="2024-10-09T00:42:17.406536339Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 9 00:42:17.406678 containerd[1605]: time="2024-10-09T00:42:17.406637539Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 00:42:17.406678 containerd[1605]: time="2024-10-09T00:42:17.406652259Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 00:42:17.406678 containerd[1605]: time="2024-10-09T00:42:17.406662139Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 00:42:17.406678 containerd[1605]: time="2024-10-09T00:42:17.406673419Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 00:42:17.406746 containerd[1605]: time="2024-10-09T00:42:17.406682419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 00:42:17.406746 containerd[1605]: time="2024-10-09T00:42:17.406694819Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 00:42:17.406746 containerd[1605]: time="2024-10-09T00:42:17.406703899Z" level=info msg="NRI interface is disabled by configuration." Oct 9 00:42:17.406746 containerd[1605]: time="2024-10-09T00:42:17.406713899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 9 00:42:17.407082 containerd[1605]: time="2024-10-09T00:42:17.407038499Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 00:42:17.407178 containerd[1605]: time="2024-10-09T00:42:17.407090019Z" level=info msg="Connect containerd service" Oct 9 00:42:17.407178 containerd[1605]: time="2024-10-09T00:42:17.407118659Z" level=info msg="using legacy CRI server" Oct 9 00:42:17.407178 containerd[1605]: time="2024-10-09T00:42:17.407124739Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 00:42:17.407237 containerd[1605]: time="2024-10-09T00:42:17.407201259Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 00:42:17.409463 containerd[1605]: time="2024-10-09T00:42:17.409372059Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 00:42:17.410451 containerd[1605]: time="2024-10-09T00:42:17.409957459Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 00:42:17.410451 containerd[1605]: time="2024-10-09T00:42:17.410005419Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 9 00:42:17.410451 containerd[1605]: time="2024-10-09T00:42:17.410013659Z" level=info msg="Start subscribing containerd event" Oct 9 00:42:17.410451 containerd[1605]: time="2024-10-09T00:42:17.410070019Z" level=info msg="Start recovering state" Oct 9 00:42:17.410451 containerd[1605]: time="2024-10-09T00:42:17.410130859Z" level=info msg="Start event monitor" Oct 9 00:42:17.410451 containerd[1605]: time="2024-10-09T00:42:17.410147139Z" level=info msg="Start snapshots syncer" Oct 9 00:42:17.410451 containerd[1605]: time="2024-10-09T00:42:17.410155139Z" level=info msg="Start cni network conf syncer for default" Oct 9 00:42:17.410451 containerd[1605]: time="2024-10-09T00:42:17.410170579Z" level=info msg="Start streaming server" Oct 9 00:42:17.410451 containerd[1605]: time="2024-10-09T00:42:17.410286259Z" level=info msg="containerd successfully booted in 0.039441s" Oct 9 00:42:17.411567 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 00:42:17.498721 tar[1591]: linux-arm64/LICENSE Oct 9 00:42:17.498721 tar[1591]: linux-arm64/README.md Oct 9 00:42:17.511896 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 00:42:18.179582 systemd-networkd[1246]: eth0: Gained IPv6LL Oct 9 00:42:18.182405 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 00:42:18.183860 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 00:42:18.195678 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 9 00:42:18.200783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:42:18.202735 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 00:42:18.211481 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 9 00:42:18.221463 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Oct 9 00:42:18.221669 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 9 00:42:18.223886 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 00:42:18.225322 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 00:42:18.236632 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 00:42:18.244679 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 00:42:18.250918 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 00:42:18.251120 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 00:42:18.253441 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 00:42:18.266590 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 00:42:18.281669 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 00:42:18.283532 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 9 00:42:18.284523 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 00:42:18.671876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:42:18.673092 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 00:42:18.675378 systemd[1]: Startup finished in 5.414s (kernel) + 3.688s (userspace) = 9.102s. 
Oct 9 00:42:18.675594 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 00:42:19.123891 kubelet[1701]: E1009 00:42:19.123722 1701 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 00:42:19.126231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 00:42:19.126411 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 00:42:22.675199 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 00:42:22.690716 systemd[1]: Started sshd@0-10.0.0.37:22-10.0.0.1:39760.service - OpenSSH per-connection server daemon (10.0.0.1:39760). Oct 9 00:42:22.743984 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 39760 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:42:22.747340 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:42:22.762828 systemd-logind[1577]: New session 1 of user core. Oct 9 00:42:22.763706 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 00:42:22.771639 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 00:42:22.780613 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 00:42:22.783584 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 00:42:22.789804 (systemd)[1722]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 00:42:22.862923 systemd[1722]: Queued start job for default target default.target. Oct 9 00:42:22.863260 systemd[1722]: Created slice app.slice - User Application Slice. 
Oct 9 00:42:22.863284 systemd[1722]: Reached target paths.target - Paths.
Oct 9 00:42:22.863295 systemd[1722]: Reached target timers.target - Timers.
Oct 9 00:42:22.871512 systemd[1722]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 9 00:42:22.876891 systemd[1722]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 9 00:42:22.876947 systemd[1722]: Reached target sockets.target - Sockets.
Oct 9 00:42:22.876958 systemd[1722]: Reached target basic.target - Basic System.
Oct 9 00:42:22.876991 systemd[1722]: Reached target default.target - Main User Target.
Oct 9 00:42:22.877012 systemd[1722]: Startup finished in 81ms.
Oct 9 00:42:22.877330 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 9 00:42:22.878682 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 9 00:42:22.934684 systemd[1]: Started sshd@1-10.0.0.37:22-10.0.0.1:39776.service - OpenSSH per-connection server daemon (10.0.0.1:39776).
Oct 9 00:42:22.974432 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 39776 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:42:22.975567 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:42:22.979078 systemd-logind[1577]: New session 2 of user core.
Oct 9 00:42:22.988637 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 9 00:42:23.038268 sshd[1734]: pam_unix(sshd:session): session closed for user core
Oct 9 00:42:23.049646 systemd[1]: Started sshd@2-10.0.0.37:22-10.0.0.1:39792.service - OpenSSH per-connection server daemon (10.0.0.1:39792).
Oct 9 00:42:23.050287 systemd[1]: sshd@1-10.0.0.37:22-10.0.0.1:39776.service: Deactivated successfully.
Oct 9 00:42:23.051968 systemd-logind[1577]: Session 2 logged out. Waiting for processes to exit.
Oct 9 00:42:23.052048 systemd[1]: session-2.scope: Deactivated successfully.
Oct 9 00:42:23.053480 systemd-logind[1577]: Removed session 2.
Oct 9 00:42:23.083780 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 39792 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:42:23.084960 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:42:23.089159 systemd-logind[1577]: New session 3 of user core.
Oct 9 00:42:23.103791 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 9 00:42:23.151529 sshd[1739]: pam_unix(sshd:session): session closed for user core
Oct 9 00:42:23.159814 systemd[1]: Started sshd@3-10.0.0.37:22-10.0.0.1:39796.service - OpenSSH per-connection server daemon (10.0.0.1:39796).
Oct 9 00:42:23.160252 systemd[1]: sshd@2-10.0.0.37:22-10.0.0.1:39792.service: Deactivated successfully.
Oct 9 00:42:23.161967 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit.
Oct 9 00:42:23.162536 systemd[1]: session-3.scope: Deactivated successfully.
Oct 9 00:42:23.163488 systemd-logind[1577]: Removed session 3.
Oct 9 00:42:23.187963 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 39796 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:42:23.189241 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:42:23.193739 systemd-logind[1577]: New session 4 of user core.
Oct 9 00:42:23.199663 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 9 00:42:23.253473 sshd[1747]: pam_unix(sshd:session): session closed for user core
Oct 9 00:42:23.262720 systemd[1]: Started sshd@4-10.0.0.37:22-10.0.0.1:39806.service - OpenSSH per-connection server daemon (10.0.0.1:39806).
Oct 9 00:42:23.263140 systemd[1]: sshd@3-10.0.0.37:22-10.0.0.1:39796.service: Deactivated successfully.
Oct 9 00:42:23.264759 systemd[1]: session-4.scope: Deactivated successfully.
Oct 9 00:42:23.265450 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit.
Oct 9 00:42:23.267334 systemd-logind[1577]: Removed session 4.
Oct 9 00:42:23.290558 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 39806 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:42:23.291793 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:42:23.295899 systemd-logind[1577]: New session 5 of user core.
Oct 9 00:42:23.312755 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 9 00:42:23.374549 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 9 00:42:23.374846 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 00:42:23.388267 sudo[1762]: pam_unix(sudo:session): session closed for user root
Oct 9 00:42:23.389881 sshd[1756]: pam_unix(sshd:session): session closed for user core
Oct 9 00:42:23.402802 systemd[1]: Started sshd@5-10.0.0.37:22-10.0.0.1:39816.service - OpenSSH per-connection server daemon (10.0.0.1:39816).
Oct 9 00:42:23.403214 systemd[1]: sshd@4-10.0.0.37:22-10.0.0.1:39806.service: Deactivated successfully.
Oct 9 00:42:23.404848 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit.
Oct 9 00:42:23.405542 systemd[1]: session-5.scope: Deactivated successfully.
Oct 9 00:42:23.407054 systemd-logind[1577]: Removed session 5.
Oct 9 00:42:23.430323 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 39816 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:42:23.431477 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:42:23.434846 systemd-logind[1577]: New session 6 of user core.
Oct 9 00:42:23.445663 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 9 00:42:23.496128 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 9 00:42:23.496401 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 00:42:23.499453 sudo[1772]: pam_unix(sudo:session): session closed for user root
Oct 9 00:42:23.503718 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 9 00:42:23.503990 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 00:42:23.524888 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 00:42:23.548287 augenrules[1794]: No rules
Oct 9 00:42:23.549061 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 00:42:23.549296 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 00:42:23.550310 sudo[1771]: pam_unix(sudo:session): session closed for user root
Oct 9 00:42:23.552150 sshd[1764]: pam_unix(sshd:session): session closed for user core
Oct 9 00:42:23.558695 systemd[1]: Started sshd@6-10.0.0.37:22-10.0.0.1:39832.service - OpenSSH per-connection server daemon (10.0.0.1:39832).
Oct 9 00:42:23.559124 systemd[1]: sshd@5-10.0.0.37:22-10.0.0.1:39816.service: Deactivated successfully.
Oct 9 00:42:23.561061 systemd[1]: session-6.scope: Deactivated successfully.
Oct 9 00:42:23.561845 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit.
Oct 9 00:42:23.563204 systemd-logind[1577]: Removed session 6.
Oct 9 00:42:23.587824 sshd[1801]: Accepted publickey for core from 10.0.0.1 port 39832 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs
Oct 9 00:42:23.589112 sshd[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:42:23.593215 systemd-logind[1577]: New session 7 of user core.
Oct 9 00:42:23.608849 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 9 00:42:23.659027 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 9 00:42:23.659308 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 00:42:23.980666 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 9 00:42:23.980862 (dockerd)[1832]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 9 00:42:24.250951 dockerd[1832]: time="2024-10-09T00:42:24.250831699Z" level=info msg="Starting up"
Oct 9 00:42:24.495465 dockerd[1832]: time="2024-10-09T00:42:24.495408419Z" level=info msg="Loading containers: start."
Oct 9 00:42:24.623484 kernel: Initializing XFRM netlink socket
Oct 9 00:42:24.682188 systemd-networkd[1246]: docker0: Link UP
Oct 9 00:42:24.706547 dockerd[1832]: time="2024-10-09T00:42:24.706503699Z" level=info msg="Loading containers: done."
Oct 9 00:42:24.720111 dockerd[1832]: time="2024-10-09T00:42:24.720061739Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 9 00:42:24.720224 dockerd[1832]: time="2024-10-09T00:42:24.720153339Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Oct 9 00:42:24.720285 dockerd[1832]: time="2024-10-09T00:42:24.720257099Z" level=info msg="Daemon has completed initialization"
Oct 9 00:42:24.746857 dockerd[1832]: time="2024-10-09T00:42:24.746808339Z" level=info msg="API listen on /run/docker.sock"
Oct 9 00:42:24.746999 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 9 00:42:25.327518 containerd[1605]: time="2024-10-09T00:42:25.327458739Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\""
Oct 9 00:42:26.115065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount69354722.mount: Deactivated successfully.
Oct 9 00:42:27.229087 containerd[1605]: time="2024-10-09T00:42:27.229038859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:27.230162 containerd[1605]: time="2024-10-09T00:42:27.230117779Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=32286060"
Oct 9 00:42:27.230927 containerd[1605]: time="2024-10-09T00:42:27.230895019Z" level=info msg="ImageCreate event name:\"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:27.234186 containerd[1605]: time="2024-10-09T00:42:27.234142139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:27.235158 containerd[1605]: time="2024-10-09T00:42:27.235119339Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"32282858\" in 1.90761752s"
Oct 9 00:42:27.235158 containerd[1605]: time="2024-10-09T00:42:27.235155819Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\""
Oct 9 00:42:27.253050 containerd[1605]: time="2024-10-09T00:42:27.253014899Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\""
Oct 9 00:42:28.813543 containerd[1605]: time="2024-10-09T00:42:28.813477819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:28.814148 containerd[1605]: time="2024-10-09T00:42:28.814092139Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=29374206"
Oct 9 00:42:28.815113 containerd[1605]: time="2024-10-09T00:42:28.815082419Z" level=info msg="ImageCreate event name:\"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:28.818523 containerd[1605]: time="2024-10-09T00:42:28.818482499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:28.819168 containerd[1605]: time="2024-10-09T00:42:28.819132939Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"30862018\" in 1.56608436s"
Oct 9 00:42:28.819208 containerd[1605]: time="2024-10-09T00:42:28.819170099Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\""
Oct 9 00:42:28.838420 containerd[1605]: time="2024-10-09T00:42:28.838387379Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\""
Oct 9 00:42:29.376735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 9 00:42:29.390677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 00:42:29.484728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 00:42:29.488216 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 00:42:29.535230 kubelet[2120]: E1009 00:42:29.535133 2120 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 00:42:29.538309 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 00:42:29.538505 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 00:42:29.936034 containerd[1605]: time="2024-10-09T00:42:29.935989259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:29.937132 containerd[1605]: time="2024-10-09T00:42:29.937079939Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=15751219"
Oct 9 00:42:29.937875 containerd[1605]: time="2024-10-09T00:42:29.937814059Z" level=info msg="ImageCreate event name:\"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:29.940658 containerd[1605]: time="2024-10-09T00:42:29.940629779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:29.941808 containerd[1605]: time="2024-10-09T00:42:29.941764739Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"17239049\" in 1.10334252s"
Oct 9 00:42:29.941808 containerd[1605]: time="2024-10-09T00:42:29.941795819Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\""
Oct 9 00:42:29.959618 containerd[1605]: time="2024-10-09T00:42:29.959584499Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\""
Oct 9 00:42:31.013260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount871852006.mount: Deactivated successfully.
Oct 9 00:42:31.420914 containerd[1605]: time="2024-10-09T00:42:31.420760139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:31.421958 containerd[1605]: time="2024-10-09T00:42:31.421900219Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=25254040"
Oct 9 00:42:31.422813 containerd[1605]: time="2024-10-09T00:42:31.422760339Z" level=info msg="ImageCreate event name:\"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:31.424680 containerd[1605]: time="2024-10-09T00:42:31.424628579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:31.425213 containerd[1605]: time="2024-10-09T00:42:31.425187299Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"25253057\" in 1.46556736s"
Oct 9 00:42:31.425473 containerd[1605]: time="2024-10-09T00:42:31.425276739Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\""
Oct 9 00:42:31.445244 containerd[1605]: time="2024-10-09T00:42:31.445207819Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 9 00:42:32.107254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657470710.mount: Deactivated successfully.
Oct 9 00:42:33.745394 containerd[1605]: time="2024-10-09T00:42:33.745333379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:33.746805 containerd[1605]: time="2024-10-09T00:42:33.746749299Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Oct 9 00:42:33.747698 containerd[1605]: time="2024-10-09T00:42:33.747646819Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:33.751035 containerd[1605]: time="2024-10-09T00:42:33.750995299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:33.752272 containerd[1605]: time="2024-10-09T00:42:33.752182379Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.30693364s"
Oct 9 00:42:33.752272 containerd[1605]: time="2024-10-09T00:42:33.752224059Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Oct 9 00:42:33.773455 containerd[1605]: time="2024-10-09T00:42:33.773223179Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 9 00:42:34.216568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1192046941.mount: Deactivated successfully.
Oct 9 00:42:34.221266 containerd[1605]: time="2024-10-09T00:42:34.221208459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:34.222174 containerd[1605]: time="2024-10-09T00:42:34.222126099Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Oct 9 00:42:34.223136 containerd[1605]: time="2024-10-09T00:42:34.223099099Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:34.225740 containerd[1605]: time="2024-10-09T00:42:34.225706339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:34.226656 containerd[1605]: time="2024-10-09T00:42:34.226627379Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 453.36648ms"
Oct 9 00:42:34.226719 containerd[1605]: time="2024-10-09T00:42:34.226656299Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Oct 9 00:42:34.245348 containerd[1605]: time="2024-10-09T00:42:34.245269339Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 9 00:42:34.775208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611429792.mount: Deactivated successfully.
Oct 9 00:42:36.865040 containerd[1605]: time="2024-10-09T00:42:36.864969899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:36.865663 containerd[1605]: time="2024-10-09T00:42:36.865610739Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Oct 9 00:42:36.866451 containerd[1605]: time="2024-10-09T00:42:36.866391939Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:36.869415 containerd[1605]: time="2024-10-09T00:42:36.869363379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 00:42:36.870777 containerd[1605]: time="2024-10-09T00:42:36.870709939Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.62540404s"
Oct 9 00:42:36.870777 containerd[1605]: time="2024-10-09T00:42:36.870742339Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Oct 9 00:42:39.788779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 9 00:42:39.798698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 00:42:39.894980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 00:42:39.897739 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 00:42:39.940413 kubelet[2347]: E1009 00:42:39.940359 2347 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 00:42:39.943044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 00:42:39.943174 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 00:42:43.654178 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 00:42:43.665636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 00:42:43.684132 systemd[1]: Reloading requested from client PID 2364 ('systemctl') (unit session-7.scope)...
Oct 9 00:42:43.684267 systemd[1]: Reloading...
Oct 9 00:42:43.741448 zram_generator::config[2406]: No configuration found.
Oct 9 00:42:43.855400 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 00:42:43.902818 systemd[1]: Reloading finished in 218 ms.
Oct 9 00:42:43.940213 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 9 00:42:43.940274 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 9 00:42:43.940641 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 00:42:43.942681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 00:42:44.035587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 00:42:44.041218 (kubelet)[2461]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 00:42:44.080919 kubelet[2461]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 00:42:44.080919 kubelet[2461]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 9 00:42:44.080919 kubelet[2461]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 00:42:44.083544 kubelet[2461]: I1009 00:42:44.083479 2461 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 9 00:42:44.774561 kubelet[2461]: I1009 00:42:44.774524 2461 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 9 00:42:44.774561 kubelet[2461]: I1009 00:42:44.774554 2461 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 9 00:42:44.774777 kubelet[2461]: I1009 00:42:44.774761 2461 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 9 00:42:44.798847 kubelet[2461]: I1009 00:42:44.798811 2461 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 00:42:44.803907 kubelet[2461]: E1009 00:42:44.803685 2461 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.37:6443: connect: connection refused
Oct 9 00:42:44.809001 kubelet[2461]: I1009 00:42:44.808978 2461 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 9 00:42:44.809344 kubelet[2461]: I1009 00:42:44.809319 2461 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 00:42:44.809520 kubelet[2461]: I1009 00:42:44.809497 2461 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 9 00:42:44.809597 kubelet[2461]: I1009 00:42:44.809522 2461 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 00:42:44.809597 kubelet[2461]: I1009 00:42:44.809532 2461 container_manager_linux.go:301] "Creating device plugin manager"
Oct 9 00:42:44.810147 kubelet[2461]: I1009 00:42:44.810120 2461 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 00:42:44.812194 kubelet[2461]: I1009 00:42:44.812174 2461 kubelet.go:396] "Attempting to sync node with API server"
Oct 9 00:42:44.812194 kubelet[2461]: I1009 00:42:44.812195 2461 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 00:42:44.812260 kubelet[2461]: I1009 00:42:44.812215 2461 kubelet.go:312] "Adding apiserver pod source"
Oct 9 00:42:44.812260 kubelet[2461]: I1009 00:42:44.812234 2461 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 00:42:44.812783 kubelet[2461]: W1009 00:42:44.812692 2461 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused
Oct 9 00:42:44.812783 kubelet[2461]: E1009 00:42:44.812746 2461 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused
Oct 9 00:42:44.812903 kubelet[2461]: W1009 00:42:44.812817 2461 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused
Oct 9 00:42:44.812903 kubelet[2461]: E1009 00:42:44.812841 2461 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused
Oct 9 00:42:44.816115 kubelet[2461]: I1009 00:42:44.815032 2461 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1"
Oct 9 00:42:44.816115 kubelet[2461]: I1009 00:42:44.815600 2461 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 00:42:44.816115 kubelet[2461]: W1009 00:42:44.815764 2461 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 9 00:42:44.816721 kubelet[2461]: I1009 00:42:44.816701 2461 server.go:1256] "Started kubelet"
Oct 9 00:42:44.817583 kubelet[2461]: I1009 00:42:44.817552 2461 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 00:42:44.817793 kubelet[2461]: I1009 00:42:44.817770 2461 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 00:42:44.817832 kubelet[2461]: I1009 00:42:44.817552 2461 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 00:42:44.818658 kubelet[2461]: I1009 00:42:44.818635 2461 server.go:461] "Adding debug handlers to kubelet server"
Oct 9 00:42:44.819165 kubelet[2461]: I1009 00:42:44.819141 2461 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 00:42:44.820400 kubelet[2461]: E1009 00:42:44.820381 2461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 9 00:42:44.820530 kubelet[2461]: I1009 00:42:44.820519 2461 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 9 00:42:44.820672 kubelet[2461]: I1009 00:42:44.820653 2461 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 9 00:42:44.821769 kubelet[2461]: I1009 00:42:44.821751 2461 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 9 00:42:44.822044 kubelet[2461]: W1009 00:42:44.821990 2461 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused
Oct 9 00:42:44.822044 kubelet[2461]: E1009 00:42:44.822039 2461 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused
Oct 9 00:42:44.822222 kubelet[2461]: E1009 00:42:44.822207 2461 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="200ms"
Oct 9 00:42:44.823271 kubelet[2461]: I1009 00:42:44.823251 2461 factory.go:221] Registration of the systemd container factory successfully
Oct 9 00:42:44.823353 kubelet[2461]: I1009 00:42:44.823331 2461 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 00:42:44.823637 kubelet[2461]: E1009 00:42:44.823619 2461 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 9 00:42:44.824311 kubelet[2461]: I1009 00:42:44.824294 2461 factory.go:221] Registration of the containerd container factory successfully
Oct 9 00:42:44.824418 kubelet[2461]: E1009 00:42:44.824317 2461 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.37:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.37:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fca2143d679e73 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 00:42:44.816674419 +0000 UTC m=+0.772205841,LastTimestamp:2024-10-09 00:42:44.816674419 +0000 UTC m=+0.772205841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 9 00:42:44.832095 kubelet[2461]: I1009 00:42:44.832064 2461 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 00:42:44.832974 kubelet[2461]: I1009 00:42:44.832949 2461 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 00:42:44.832974 kubelet[2461]: I1009 00:42:44.832969 2461 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 00:42:44.833051 kubelet[2461]: I1009 00:42:44.832983 2461 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 9 00:42:44.833051 kubelet[2461]: E1009 00:42:44.833030 2461 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 00:42:44.838367 kubelet[2461]: W1009 00:42:44.838316 2461 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused
Oct 9 00:42:44.838367 kubelet[2461]: E1009 00:42:44.838362 2461 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused
Oct 9 00:42:44.841691 kubelet[2461]: I1009 00:42:44.841673 2461 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 00:42:44.841786 kubelet[2461]: I1009 00:42:44.841776 2461 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 00:42:44.841842 kubelet[2461]: I1009 00:42:44.841834 2461 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 00:42:44.905664 kubelet[2461]: I1009 00:42:44.905632 2461 policy_none.go:49] "None policy: Start"
Oct 9 00:42:44.906670 kubelet[2461]: I1009 00:42:44.906642 2461 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 00:42:44.906720 kubelet[2461]: I1009 00:42:44.906712 2461 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 00:42:44.910729 kubelet[2461]: I1009 00:42:44.910660 2461 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 00:42:44.911530 kubelet[2461]: I1009 00:42:44.910902 2461 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 00:42:44.911938 kubelet[2461]: E1009 00:42:44.911918 2461 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 9 00:42:44.923688 kubelet[2461]: I1009 00:42:44.923664 2461 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Oct 9 00:42:44.924126 kubelet[2461]: E1009 00:42:44.924105 2461 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost"
Oct 9 00:42:44.933191 kubelet[2461]: I1009 00:42:44.933166 2461 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Oct 9 00:42:44.934132 kubelet[2461]: I1009 00:42:44.934110 2461 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost"
Oct 9 00:42:44.935190 kubelet[2461]: I1009 00:42:44.935147 2461 topology_manager.go:215] "Topology Admit Handler" podUID="80ed8c98e7294477989278215e1bd5a2" podNamespace="kube-system" podName="kube-apiserver-localhost"
Oct 9 00:42:45.022942 kubelet[2461]: E1009 00:42:45.022913 2461 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="400ms"
Oct 9 00:42:45.123541 kubelet[2461]: I1009 00:42:45.123377 2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/80ed8c98e7294477989278215e1bd5a2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"80ed8c98e7294477989278215e1bd5a2\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:42:45.123541 kubelet[2461]: I1009 00:42:45.123439 2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:42:45.123541 kubelet[2461]: I1009 00:42:45.123469 2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:42:45.123541 kubelet[2461]: I1009 00:42:45.123492 2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:42:45.123541 kubelet[2461]: I1009 00:42:45.123515 2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 9 00:42:45.123953 kubelet[2461]: I1009 00:42:45.123534 2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80ed8c98e7294477989278215e1bd5a2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"80ed8c98e7294477989278215e1bd5a2\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:42:45.123953 kubelet[2461]: I1009 00:42:45.123570 2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:42:45.123953 kubelet[2461]: I1009 00:42:45.123612 2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:42:45.123953 kubelet[2461]: I1009 00:42:45.123644 2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80ed8c98e7294477989278215e1bd5a2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"80ed8c98e7294477989278215e1bd5a2\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:42:45.125464 kubelet[2461]: I1009 00:42:45.125415 2461 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:42:45.125764 kubelet[2461]: E1009 00:42:45.125749 2461 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost" Oct 9 00:42:45.238820 kubelet[2461]: E1009 00:42:45.238752 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:45.239392 kubelet[2461]: E1009 00:42:45.239362 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:45.239697 containerd[1605]: time="2024-10-09T00:42:45.239645139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}" Oct 9 00:42:45.239697 containerd[1605]: time="2024-10-09T00:42:45.239679699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}" Oct 9 00:42:45.240030 kubelet[2461]: E1009 00:42:45.240010 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:45.240302 containerd[1605]: time="2024-10-09T00:42:45.240274379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:80ed8c98e7294477989278215e1bd5a2,Namespace:kube-system,Attempt:0,}" Oct 9 00:42:45.423546 kubelet[2461]: E1009 00:42:45.423410 2461 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="800ms" Oct 9 00:42:45.526887 kubelet[2461]: I1009 00:42:45.526838 2461 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:42:45.527180 kubelet[2461]: E1009 00:42:45.527151 2461 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost" Oct 9 00:42:45.687182 kubelet[2461]: W1009 00:42:45.687034 2461 
reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused Oct 9 00:42:45.687182 kubelet[2461]: E1009 00:42:45.687109 2461 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused Oct 9 00:42:45.687182 kubelet[2461]: W1009 00:42:45.687046 2461 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused Oct 9 00:42:45.687182 kubelet[2461]: E1009 00:42:45.687129 2461 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused Oct 9 00:42:45.728941 kubelet[2461]: W1009 00:42:45.728876 2461 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused Oct 9 00:42:45.728941 kubelet[2461]: E1009 00:42:45.728932 2461 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused Oct 9 00:42:45.738413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1180215822.mount: Deactivated successfully. 
Oct 9 00:42:45.742983 containerd[1605]: time="2024-10-09T00:42:45.742930739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:42:45.743748 containerd[1605]: time="2024-10-09T00:42:45.743700819Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Oct 9 00:42:45.744458 containerd[1605]: time="2024-10-09T00:42:45.744415539Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:42:45.745515 containerd[1605]: time="2024-10-09T00:42:45.745423099Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:42:45.746335 containerd[1605]: time="2024-10-09T00:42:45.746294299Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 00:42:45.746860 containerd[1605]: time="2024-10-09T00:42:45.746821739Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 00:42:45.747094 containerd[1605]: time="2024-10-09T00:42:45.747065419Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:42:45.749693 containerd[1605]: time="2024-10-09T00:42:45.749660859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 00:42:45.750573 
containerd[1605]: time="2024-10-09T00:42:45.750544619Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 510.21652ms" Oct 9 00:42:45.752174 containerd[1605]: time="2024-10-09T00:42:45.752140939Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 512.4128ms" Oct 9 00:42:45.755536 containerd[1605]: time="2024-10-09T00:42:45.755498539Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 515.76624ms" Oct 9 00:42:45.908115 containerd[1605]: time="2024-10-09T00:42:45.907408819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:42:45.908115 containerd[1605]: time="2024-10-09T00:42:45.907829739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:42:45.908115 containerd[1605]: time="2024-10-09T00:42:45.907846779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:42:45.908115 containerd[1605]: time="2024-10-09T00:42:45.907929859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:42:45.910982 containerd[1605]: time="2024-10-09T00:42:45.910823499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:42:45.910982 containerd[1605]: time="2024-10-09T00:42:45.910892939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:42:45.910982 containerd[1605]: time="2024-10-09T00:42:45.910908299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:42:45.911111 containerd[1605]: time="2024-10-09T00:42:45.910995299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:42:45.911748 containerd[1605]: time="2024-10-09T00:42:45.911470219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:42:45.911748 containerd[1605]: time="2024-10-09T00:42:45.911521179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:42:45.911748 containerd[1605]: time="2024-10-09T00:42:45.911536259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:42:45.911748 containerd[1605]: time="2024-10-09T00:42:45.911620019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:42:45.957065 containerd[1605]: time="2024-10-09T00:42:45.956955299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef968c8bcd3a5b51ae9d7486b48581e972db87c95b0b66e4fe11a3cea21cf54c\"" Oct 9 00:42:45.958977 kubelet[2461]: E1009 00:42:45.958903 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:45.962343 containerd[1605]: time="2024-10-09T00:42:45.962273859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"4417883b92678fddde7ae1a090972470d5270320b925ddebf9269d76ff70e451\"" Oct 9 00:42:45.962909 kubelet[2461]: E1009 00:42:45.962889 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:45.963837 containerd[1605]: time="2024-10-09T00:42:45.963804379Z" level=info msg="CreateContainer within sandbox \"ef968c8bcd3a5b51ae9d7486b48581e972db87c95b0b66e4fe11a3cea21cf54c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 00:42:45.965295 containerd[1605]: time="2024-10-09T00:42:45.965150899Z" level=info msg="CreateContainer within sandbox \"4417883b92678fddde7ae1a090972470d5270320b925ddebf9269d76ff70e451\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 00:42:45.977587 containerd[1605]: time="2024-10-09T00:42:45.977550659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:80ed8c98e7294477989278215e1bd5a2,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"a3530dfc1e3c598a9960f7ed414be30b918fde2ac2be5dc4b018b32fb177d113\"" Oct 9 00:42:45.978232 kubelet[2461]: E1009 00:42:45.978212 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:45.980130 containerd[1605]: time="2024-10-09T00:42:45.980064699Z" level=info msg="CreateContainer within sandbox \"a3530dfc1e3c598a9960f7ed414be30b918fde2ac2be5dc4b018b32fb177d113\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 00:42:45.981951 containerd[1605]: time="2024-10-09T00:42:45.981910339Z" level=info msg="CreateContainer within sandbox \"ef968c8bcd3a5b51ae9d7486b48581e972db87c95b0b66e4fe11a3cea21cf54c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"06d3c0f288881dade354ee680ef26935cd6131a59341746e244b0f91013053c8\"" Oct 9 00:42:45.984123 containerd[1605]: time="2024-10-09T00:42:45.984090539Z" level=info msg="CreateContainer within sandbox \"4417883b92678fddde7ae1a090972470d5270320b925ddebf9269d76ff70e451\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9f048d6a9d17ed07feb512b118be900bb24e295f9e0bf936c9466b5de9c67199\"" Oct 9 00:42:45.984318 containerd[1605]: time="2024-10-09T00:42:45.984287459Z" level=info msg="StartContainer for \"06d3c0f288881dade354ee680ef26935cd6131a59341746e244b0f91013053c8\"" Oct 9 00:42:45.991355 containerd[1605]: time="2024-10-09T00:42:45.991325699Z" level=info msg="StartContainer for \"9f048d6a9d17ed07feb512b118be900bb24e295f9e0bf936c9466b5de9c67199\"" Oct 9 00:42:45.994911 containerd[1605]: time="2024-10-09T00:42:45.994862659Z" level=info msg="CreateContainer within sandbox \"a3530dfc1e3c598a9960f7ed414be30b918fde2ac2be5dc4b018b32fb177d113\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1c8c2e17a40e1aa2ea22122cec9cac172ccbe45717f49f6ad0ca9f5e9e435948\"" Oct 9 00:42:45.997266 
containerd[1605]: time="2024-10-09T00:42:45.996993619Z" level=info msg="StartContainer for \"1c8c2e17a40e1aa2ea22122cec9cac172ccbe45717f49f6ad0ca9f5e9e435948\"" Oct 9 00:42:46.094988 containerd[1605]: time="2024-10-09T00:42:46.090200579Z" level=info msg="StartContainer for \"9f048d6a9d17ed07feb512b118be900bb24e295f9e0bf936c9466b5de9c67199\" returns successfully" Oct 9 00:42:46.094988 containerd[1605]: time="2024-10-09T00:42:46.090349859Z" level=info msg="StartContainer for \"06d3c0f288881dade354ee680ef26935cd6131a59341746e244b0f91013053c8\" returns successfully" Oct 9 00:42:46.094988 containerd[1605]: time="2024-10-09T00:42:46.090377459Z" level=info msg="StartContainer for \"1c8c2e17a40e1aa2ea22122cec9cac172ccbe45717f49f6ad0ca9f5e9e435948\" returns successfully" Oct 9 00:42:46.230519 kubelet[2461]: E1009 00:42:46.224226 2461 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="1.6s" Oct 9 00:42:46.231783 kubelet[2461]: W1009 00:42:46.231740 2461 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused Oct 9 00:42:46.231861 kubelet[2461]: E1009 00:42:46.231792 2461 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused Oct 9 00:42:46.329077 kubelet[2461]: I1009 00:42:46.329046 2461 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 00:42:46.851648 kubelet[2461]: E1009 00:42:46.851617 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:46.854103 kubelet[2461]: E1009 00:42:46.854075 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:46.855962 kubelet[2461]: E1009 00:42:46.855942 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:47.832719 kubelet[2461]: E1009 00:42:47.832672 2461 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 9 00:42:47.859507 kubelet[2461]: E1009 00:42:47.858352 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:47.859507 kubelet[2461]: E1009 00:42:47.858372 2461 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:47.867464 kubelet[2461]: I1009 00:42:47.867413 2461 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 00:42:47.876927 kubelet[2461]: E1009 00:42:47.875876 2461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:42:47.977580 kubelet[2461]: E1009 00:42:47.977533 2461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:42:48.078266 kubelet[2461]: E1009 00:42:48.078221 2461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:42:48.179054 kubelet[2461]: E1009 00:42:48.178943 2461 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:42:48.279693 kubelet[2461]: E1009 00:42:48.279651 2461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:42:48.380391 kubelet[2461]: E1009 00:42:48.380353 2461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:42:48.481073 kubelet[2461]: E1009 00:42:48.481043 2461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:42:48.581607 kubelet[2461]: E1009 00:42:48.581568 2461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:42:48.682552 kubelet[2461]: E1009 00:42:48.682512 2461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:42:48.783715 kubelet[2461]: E1009 00:42:48.783355 2461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 9 00:42:49.819026 kubelet[2461]: I1009 00:42:49.818991 2461 apiserver.go:52] "Watching apiserver" Oct 9 00:42:49.822433 kubelet[2461]: I1009 00:42:49.822399 2461 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 00:42:50.675878 systemd[1]: Reloading requested from client PID 2741 ('systemctl') (unit session-7.scope)... Oct 9 00:42:50.675896 systemd[1]: Reloading... Oct 9 00:42:50.742469 zram_generator::config[2781]: No configuration found. Oct 9 00:42:50.829903 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 00:42:50.884583 systemd[1]: Reloading finished in 208 ms. 
Oct 9 00:42:50.910526 kubelet[2461]: I1009 00:42:50.910470 2461 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 00:42:50.910572 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:42:50.927460 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 00:42:50.927765 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:42:50.935726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 00:42:51.018977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 00:42:51.022720 (kubelet)[2832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 00:42:51.066363 kubelet[2832]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 00:42:51.066363 kubelet[2832]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 00:42:51.066363 kubelet[2832]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 00:42:51.066745 kubelet[2832]: I1009 00:42:51.066419 2832 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 00:42:51.071014 kubelet[2832]: I1009 00:42:51.070693 2832 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 00:42:51.071014 kubelet[2832]: I1009 00:42:51.070717 2832 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 00:42:51.071014 kubelet[2832]: I1009 00:42:51.070914 2832 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 00:42:51.073188 kubelet[2832]: I1009 00:42:51.072823 2832 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 00:42:51.075301 kubelet[2832]: I1009 00:42:51.075264 2832 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 00:42:51.081900 kubelet[2832]: I1009 00:42:51.081877 2832 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 00:42:51.082380 kubelet[2832]: I1009 00:42:51.082360 2832 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 00:42:51.082768 kubelet[2832]: I1009 00:42:51.082741 2832 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 00:42:51.083383 kubelet[2832]: I1009 00:42:51.082873 2832 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 00:42:51.083383 kubelet[2832]: I1009 00:42:51.082887 2832 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 00:42:51.083383 kubelet[2832]: I1009 
00:42:51.082923 2832 state_mem.go:36] "Initialized new in-memory state store" Oct 9 00:42:51.083383 kubelet[2832]: I1009 00:42:51.083019 2832 kubelet.go:396] "Attempting to sync node with API server" Oct 9 00:42:51.083383 kubelet[2832]: I1009 00:42:51.083035 2832 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 00:42:51.083383 kubelet[2832]: I1009 00:42:51.083055 2832 kubelet.go:312] "Adding apiserver pod source" Oct 9 00:42:51.083383 kubelet[2832]: I1009 00:42:51.083075 2832 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 00:42:51.094706 kubelet[2832]: I1009 00:42:51.085989 2832 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 00:42:51.094706 kubelet[2832]: I1009 00:42:51.086194 2832 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 00:42:51.094706 kubelet[2832]: I1009 00:42:51.086674 2832 server.go:1256] "Started kubelet" Oct 9 00:42:51.094706 kubelet[2832]: I1009 00:42:51.087034 2832 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 00:42:51.094706 kubelet[2832]: I1009 00:42:51.087262 2832 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 00:42:51.094706 kubelet[2832]: I1009 00:42:51.087350 2832 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 00:42:51.094706 kubelet[2832]: I1009 00:42:51.088089 2832 server.go:461] "Adding debug handlers to kubelet server" Oct 9 00:42:51.094706 kubelet[2832]: I1009 00:42:51.094564 2832 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 00:42:51.099422 sudo[2847]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 9 00:42:51.099828 sudo[2847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 9 
00:42:51.104083 kubelet[2832]: I1009 00:42:51.103716 2832 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 00:42:51.106553 kubelet[2832]: I1009 00:42:51.106420 2832 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 00:42:51.108653 kubelet[2832]: I1009 00:42:51.108602 2832 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 00:42:51.110404 kubelet[2832]: E1009 00:42:51.110359 2832 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 00:42:51.112121 kubelet[2832]: I1009 00:42:51.111684 2832 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 00:42:51.112368 kubelet[2832]: I1009 00:42:51.112162 2832 factory.go:221] Registration of the containerd container factory successfully Oct 9 00:42:51.112368 kubelet[2832]: I1009 00:42:51.112173 2832 factory.go:221] Registration of the systemd container factory successfully Oct 9 00:42:51.113452 kubelet[2832]: I1009 00:42:51.113213 2832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 00:42:51.114553 kubelet[2832]: I1009 00:42:51.114535 2832 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 00:42:51.114930 kubelet[2832]: I1009 00:42:51.114630 2832 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 00:42:51.114930 kubelet[2832]: I1009 00:42:51.114652 2832 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 00:42:51.114930 kubelet[2832]: E1009 00:42:51.114699 2832 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 00:42:51.156992 kubelet[2832]: I1009 00:42:51.156948 2832 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 00:42:51.156992 kubelet[2832]: I1009 00:42:51.156974 2832 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 00:42:51.156992 kubelet[2832]: I1009 00:42:51.156992 2832 state_mem.go:36] "Initialized new in-memory state store" Oct 9 00:42:51.157170 kubelet[2832]: I1009 00:42:51.157125 2832 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 00:42:51.157170 kubelet[2832]: I1009 00:42:51.157144 2832 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 00:42:51.157170 kubelet[2832]: I1009 00:42:51.157150 2832 policy_none.go:49] "None policy: Start" Oct 9 00:42:51.157736 kubelet[2832]: I1009 00:42:51.157716 2832 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 00:42:51.157799 kubelet[2832]: I1009 00:42:51.157762 2832 state_mem.go:35] "Initializing new in-memory state store" Oct 9 00:42:51.157976 kubelet[2832]: I1009 00:42:51.157940 2832 state_mem.go:75] "Updated machine memory state" Oct 9 00:42:51.159424 kubelet[2832]: I1009 00:42:51.159029 2832 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 00:42:51.159424 kubelet[2832]: I1009 00:42:51.159240 2832 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 00:42:51.208328 kubelet[2832]: I1009 00:42:51.208226 2832 kubelet_node_status.go:73] "Attempting to register node" 
node="localhost" Oct 9 00:42:51.215088 kubelet[2832]: I1009 00:42:51.214964 2832 topology_manager.go:215] "Topology Admit Handler" podUID="80ed8c98e7294477989278215e1bd5a2" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 00:42:51.215694 kubelet[2832]: I1009 00:42:51.215648 2832 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 00:42:51.215770 kubelet[2832]: I1009 00:42:51.215725 2832 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 00:42:51.216491 kubelet[2832]: I1009 00:42:51.215037 2832 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 9 00:42:51.216491 kubelet[2832]: I1009 00:42:51.216485 2832 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 00:42:51.408460 kubelet[2832]: I1009 00:42:51.408416 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:42:51.408460 kubelet[2832]: I1009 00:42:51.408469 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 9 00:42:51.408593 kubelet[2832]: I1009 00:42:51.408493 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/80ed8c98e7294477989278215e1bd5a2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"80ed8c98e7294477989278215e1bd5a2\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:42:51.408593 kubelet[2832]: I1009 00:42:51.408528 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80ed8c98e7294477989278215e1bd5a2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"80ed8c98e7294477989278215e1bd5a2\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:42:51.408593 kubelet[2832]: I1009 00:42:51.408548 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:42:51.408593 kubelet[2832]: I1009 00:42:51.408568 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:42:51.408593 kubelet[2832]: I1009 00:42:51.408585 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80ed8c98e7294477989278215e1bd5a2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"80ed8c98e7294477989278215e1bd5a2\") " pod="kube-system/kube-apiserver-localhost" Oct 9 00:42:51.408710 kubelet[2832]: I1009 00:42:51.408603 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:42:51.408710 kubelet[2832]: I1009 00:42:51.408622 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 00:42:51.523458 kubelet[2832]: E1009 00:42:51.523354 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:51.524490 kubelet[2832]: E1009 00:42:51.523765 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:51.527987 kubelet[2832]: E1009 00:42:51.527960 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:51.538113 sudo[2847]: pam_unix(sudo:session): session closed for user root Oct 9 00:42:52.084481 kubelet[2832]: I1009 00:42:52.084441 2832 apiserver.go:52] "Watching apiserver" Oct 9 00:42:52.113445 kubelet[2832]: I1009 00:42:52.113389 2832 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 00:42:52.131439 kubelet[2832]: E1009 00:42:52.131187 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:52.131439 kubelet[2832]: E1009 00:42:52.131297 2832 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:52.139585 kubelet[2832]: E1009 00:42:52.139055 2832 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 9 00:42:52.139585 kubelet[2832]: E1009 00:42:52.139528 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:52.150479 kubelet[2832]: I1009 00:42:52.150415 2832 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.150375761 podStartE2EDuration="1.150375761s" podCreationTimestamp="2024-10-09 00:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:42:52.150131517 +0000 UTC m=+1.123067990" watchObservedRunningTime="2024-10-09 00:42:52.150375761 +0000 UTC m=+1.123312194" Oct 9 00:42:52.162520 kubelet[2832]: I1009 00:42:52.162495 2832 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.16245028 podStartE2EDuration="1.16245028s" podCreationTimestamp="2024-10-09 00:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:42:52.161630826 +0000 UTC m=+1.134567299" watchObservedRunningTime="2024-10-09 00:42:52.16245028 +0000 UTC m=+1.135386793" Oct 9 00:42:52.168239 kubelet[2832]: I1009 00:42:52.168197 2832 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.168157494 podStartE2EDuration="1.168157494s" podCreationTimestamp="2024-10-09 00:42:51 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:42:52.16790101 +0000 UTC m=+1.140837443" watchObservedRunningTime="2024-10-09 00:42:52.168157494 +0000 UTC m=+1.141093967" Oct 9 00:42:53.133357 kubelet[2832]: E1009 00:42:53.133165 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:53.133357 kubelet[2832]: E1009 00:42:53.133240 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:53.570672 sudo[1811]: pam_unix(sudo:session): session closed for user root Oct 9 00:42:53.572708 sshd[1801]: pam_unix(sshd:session): session closed for user core Oct 9 00:42:53.576628 systemd[1]: sshd@6-10.0.0.37:22-10.0.0.1:39832.service: Deactivated successfully. Oct 9 00:42:53.578365 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 00:42:53.578975 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit. Oct 9 00:42:53.580077 systemd-logind[1577]: Removed session 7. 
Oct 9 00:42:56.695563 kubelet[2832]: E1009 00:42:56.695535 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:57.139285 kubelet[2832]: E1009 00:42:57.139189 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:58.471576 kubelet[2832]: E1009 00:42:58.471550 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:42:59.141316 kubelet[2832]: E1009 00:42:59.141289 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:02.577087 update_engine[1582]: I20241009 00:43:02.576997 1582 update_attempter.cc:509] Updating boot flags... 
Oct 9 00:43:02.598449 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2916) Oct 9 00:43:02.624526 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2919) Oct 9 00:43:02.648517 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2919) Oct 9 00:43:03.005652 kubelet[2832]: E1009 00:43:03.005626 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:06.022400 kubelet[2832]: I1009 00:43:06.022223 2832 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 00:43:06.026293 containerd[1605]: time="2024-10-09T00:43:06.026230965Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 9 00:43:06.026655 kubelet[2832]: I1009 00:43:06.026498 2832 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 00:43:06.481686 kubelet[2832]: I1009 00:43:06.481641 2832 topology_manager.go:215] "Topology Admit Handler" podUID="51f8973b-fbff-44d0-8af7-3dde671a784e" podNamespace="kube-system" podName="kube-proxy-gb6bf" Oct 9 00:43:06.495901 kubelet[2832]: I1009 00:43:06.495848 2832 topology_manager.go:215] "Topology Admit Handler" podUID="d8ab3d93-f975-422f-9108-8a84e64a0447" podNamespace="kube-system" podName="cilium-z9vnm" Oct 9 00:43:06.613708 kubelet[2832]: I1009 00:43:06.613673 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-xtables-lock\") pod \"cilium-z9vnm\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " pod="kube-system/cilium-z9vnm" Oct 9 00:43:06.613819 kubelet[2832]: I1009 00:43:06.613721 2832 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-host-proc-sys-net\") pod \"cilium-z9vnm\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " pod="kube-system/cilium-z9vnm" Oct 9 00:43:06.613819 kubelet[2832]: I1009 00:43:06.613748 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51f8973b-fbff-44d0-8af7-3dde671a784e-kube-proxy\") pod \"kube-proxy-gb6bf\" (UID: \"51f8973b-fbff-44d0-8af7-3dde671a784e\") " pod="kube-system/kube-proxy-gb6bf" Oct 9 00:43:06.613819 kubelet[2832]: I1009 00:43:06.613769 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-hostproc\") pod \"cilium-z9vnm\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " pod="kube-system/cilium-z9vnm" Oct 9 00:43:06.613819 kubelet[2832]: I1009 00:43:06.613789 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-etc-cni-netd\") pod \"cilium-z9vnm\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " pod="kube-system/cilium-z9vnm" Oct 9 00:43:06.613819 kubelet[2832]: I1009 00:43:06.613808 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-host-proc-sys-kernel\") pod \"cilium-z9vnm\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " pod="kube-system/cilium-z9vnm" Oct 9 00:43:06.613943 kubelet[2832]: I1009 00:43:06.613826 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-cilium-cgroup\") pod \"cilium-z9vnm\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " pod="kube-system/cilium-z9vnm" Oct 9 00:43:06.613943 kubelet[2832]: I1009 00:43:06.613846 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8ab3d93-f975-422f-9108-8a84e64a0447-cilium-config-path\") pod \"cilium-z9vnm\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " pod="kube-system/cilium-z9vnm" Oct 9 00:43:06.613943 kubelet[2832]: I1009 00:43:06.613867 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51f8973b-fbff-44d0-8af7-3dde671a784e-xtables-lock\") pod \"kube-proxy-gb6bf\" (UID: \"51f8973b-fbff-44d0-8af7-3dde671a784e\") " pod="kube-system/kube-proxy-gb6bf" Oct 9 00:43:06.613943 kubelet[2832]: I1009 00:43:06.613899 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4qbt\" (UniqueName: \"kubernetes.io/projected/51f8973b-fbff-44d0-8af7-3dde671a784e-kube-api-access-b4qbt\") pod \"kube-proxy-gb6bf\" (UID: \"51f8973b-fbff-44d0-8af7-3dde671a784e\") " pod="kube-system/kube-proxy-gb6bf" Oct 9 00:43:06.613943 kubelet[2832]: I1009 00:43:06.613918 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8ab3d93-f975-422f-9108-8a84e64a0447-hubble-tls\") pod \"cilium-z9vnm\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " pod="kube-system/cilium-z9vnm" Oct 9 00:43:06.614046 kubelet[2832]: I1009 00:43:06.613937 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8ab3d93-f975-422f-9108-8a84e64a0447-clustermesh-secrets\") pod 
\"cilium-z9vnm\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " pod="kube-system/cilium-z9vnm" Oct 9 00:43:06.614046 kubelet[2832]: I1009 00:43:06.613959 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51f8973b-fbff-44d0-8af7-3dde671a784e-lib-modules\") pod \"kube-proxy-gb6bf\" (UID: \"51f8973b-fbff-44d0-8af7-3dde671a784e\") " pod="kube-system/kube-proxy-gb6bf" Oct 9 00:43:06.614046 kubelet[2832]: I1009 00:43:06.613978 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-cilium-run\") pod \"cilium-z9vnm\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " pod="kube-system/cilium-z9vnm" Oct 9 00:43:06.614046 kubelet[2832]: I1009 00:43:06.613997 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-cni-path\") pod \"cilium-z9vnm\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " pod="kube-system/cilium-z9vnm" Oct 9 00:43:06.614046 kubelet[2832]: I1009 00:43:06.614015 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-lib-modules\") pod \"cilium-z9vnm\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " pod="kube-system/cilium-z9vnm" Oct 9 00:43:06.614046 kubelet[2832]: I1009 00:43:06.614036 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-bpf-maps\") pod \"cilium-z9vnm\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " pod="kube-system/cilium-z9vnm" Oct 9 00:43:06.614216 kubelet[2832]: I1009 00:43:06.614056 2832 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8flhj\" (UniqueName: \"kubernetes.io/projected/d8ab3d93-f975-422f-9108-8a84e64a0447-kube-api-access-8flhj\") pod \"cilium-z9vnm\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " pod="kube-system/cilium-z9vnm" Oct 9 00:43:06.728315 kubelet[2832]: E1009 00:43:06.727870 2832 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 9 00:43:06.728315 kubelet[2832]: E1009 00:43:06.727904 2832 projected.go:200] Error preparing data for projected volume kube-api-access-8flhj for pod kube-system/cilium-z9vnm: configmap "kube-root-ca.crt" not found Oct 9 00:43:06.728315 kubelet[2832]: E1009 00:43:06.727985 2832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8ab3d93-f975-422f-9108-8a84e64a0447-kube-api-access-8flhj podName:d8ab3d93-f975-422f-9108-8a84e64a0447 nodeName:}" failed. No retries permitted until 2024-10-09 00:43:07.227964053 +0000 UTC m=+16.200900526 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8flhj" (UniqueName: "kubernetes.io/projected/d8ab3d93-f975-422f-9108-8a84e64a0447-kube-api-access-8flhj") pod "cilium-z9vnm" (UID: "d8ab3d93-f975-422f-9108-8a84e64a0447") : configmap "kube-root-ca.crt" not found Oct 9 00:43:06.728315 kubelet[2832]: E1009 00:43:06.728208 2832 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 9 00:43:06.728315 kubelet[2832]: E1009 00:43:06.728233 2832 projected.go:200] Error preparing data for projected volume kube-api-access-b4qbt for pod kube-system/kube-proxy-gb6bf: configmap "kube-root-ca.crt" not found Oct 9 00:43:06.728315 kubelet[2832]: E1009 00:43:06.728267 2832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/51f8973b-fbff-44d0-8af7-3dde671a784e-kube-api-access-b4qbt podName:51f8973b-fbff-44d0-8af7-3dde671a784e nodeName:}" failed. No retries permitted until 2024-10-09 00:43:07.228254255 +0000 UTC m=+16.201190728 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-b4qbt" (UniqueName: "kubernetes.io/projected/51f8973b-fbff-44d0-8af7-3dde671a784e-kube-api-access-b4qbt") pod "kube-proxy-gb6bf" (UID: "51f8973b-fbff-44d0-8af7-3dde671a784e") : configmap "kube-root-ca.crt" not found Oct 9 00:43:07.073566 kubelet[2832]: I1009 00:43:07.072992 2832 topology_manager.go:215] "Topology Admit Handler" podUID="3888e6d3-cef2-410c-8a64-c0c3ce1214dd" podNamespace="kube-system" podName="cilium-operator-5cc964979-c7hnc" Oct 9 00:43:07.218036 kubelet[2832]: I1009 00:43:07.217989 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24gv5\" (UniqueName: \"kubernetes.io/projected/3888e6d3-cef2-410c-8a64-c0c3ce1214dd-kube-api-access-24gv5\") pod \"cilium-operator-5cc964979-c7hnc\" (UID: \"3888e6d3-cef2-410c-8a64-c0c3ce1214dd\") " pod="kube-system/cilium-operator-5cc964979-c7hnc" Oct 9 00:43:07.218036 kubelet[2832]: I1009 00:43:07.218041 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3888e6d3-cef2-410c-8a64-c0c3ce1214dd-cilium-config-path\") pod \"cilium-operator-5cc964979-c7hnc\" (UID: \"3888e6d3-cef2-410c-8a64-c0c3ce1214dd\") " pod="kube-system/cilium-operator-5cc964979-c7hnc" Oct 9 00:43:07.377547 kubelet[2832]: E1009 00:43:07.377204 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:07.377890 containerd[1605]: time="2024-10-09T00:43:07.377849958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-c7hnc,Uid:3888e6d3-cef2-410c-8a64-c0c3ce1214dd,Namespace:kube-system,Attempt:0,}" Oct 9 00:43:07.385285 kubelet[2832]: E1009 00:43:07.385165 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:07.385669 containerd[1605]: time="2024-10-09T00:43:07.385637407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gb6bf,Uid:51f8973b-fbff-44d0-8af7-3dde671a784e,Namespace:kube-system,Attempt:0,}" Oct 9 00:43:07.405493 kubelet[2832]: E1009 00:43:07.404560 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:07.405687 containerd[1605]: time="2024-10-09T00:43:07.405048889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z9vnm,Uid:d8ab3d93-f975-422f-9108-8a84e64a0447,Namespace:kube-system,Attempt:0,}" Oct 9 00:43:07.409641 containerd[1605]: time="2024-10-09T00:43:07.409528317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:43:07.409641 containerd[1605]: time="2024-10-09T00:43:07.409616157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:43:07.409641 containerd[1605]: time="2024-10-09T00:43:07.409628957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:43:07.410039 containerd[1605]: time="2024-10-09T00:43:07.409715278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:43:07.415906 containerd[1605]: time="2024-10-09T00:43:07.415625515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:43:07.415906 containerd[1605]: time="2024-10-09T00:43:07.415685515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:43:07.415906 containerd[1605]: time="2024-10-09T00:43:07.415699955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:43:07.417540 containerd[1605]: time="2024-10-09T00:43:07.417486447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:43:07.438765 containerd[1605]: time="2024-10-09T00:43:07.438676499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:43:07.438765 containerd[1605]: time="2024-10-09T00:43:07.438726620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:43:07.439894 containerd[1605]: time="2024-10-09T00:43:07.438772620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:43:07.439894 containerd[1605]: time="2024-10-09T00:43:07.438892421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:43:07.459735 containerd[1605]: time="2024-10-09T00:43:07.459701551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-c7hnc,Uid:3888e6d3-cef2-410c-8a64-c0c3ce1214dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"96e524fd9ee7325c7fa0bdad3c45d95ae0340a7603c369256f772b198d2ac647\"" Oct 9 00:43:07.460640 containerd[1605]: time="2024-10-09T00:43:07.460572476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gb6bf,Uid:51f8973b-fbff-44d0-8af7-3dde671a784e,Namespace:kube-system,Attempt:0,} returns sandbox id \"440ebbbf330ba336e466ad38e583b5c7527025a706e4534e4ec5cfc256f39551\"" Oct 9 00:43:07.462807 kubelet[2832]: E1009 00:43:07.462764 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:07.462955 kubelet[2832]: E1009 00:43:07.462890 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:07.472903 containerd[1605]: time="2024-10-09T00:43:07.472830033Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 9 00:43:07.475128 containerd[1605]: time="2024-10-09T00:43:07.475096807Z" level=info msg="CreateContainer within sandbox \"440ebbbf330ba336e466ad38e583b5c7527025a706e4534e4ec5cfc256f39551\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 00:43:07.485668 containerd[1605]: time="2024-10-09T00:43:07.485634113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z9vnm,Uid:d8ab3d93-f975-422f-9108-8a84e64a0447,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81\"" Oct 9 00:43:07.486209 
kubelet[2832]: E1009 00:43:07.486188 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:07.508595 containerd[1605]: time="2024-10-09T00:43:07.508544137Z" level=info msg="CreateContainer within sandbox \"440ebbbf330ba336e466ad38e583b5c7527025a706e4534e4ec5cfc256f39551\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"91fe44076df7645966007779c7b3ccb06e6836f1a7834e0a9d77ca05ba3bd629\"" Oct 9 00:43:07.509668 containerd[1605]: time="2024-10-09T00:43:07.509639024Z" level=info msg="StartContainer for \"91fe44076df7645966007779c7b3ccb06e6836f1a7834e0a9d77ca05ba3bd629\"" Oct 9 00:43:07.563718 containerd[1605]: time="2024-10-09T00:43:07.563673362Z" level=info msg="StartContainer for \"91fe44076df7645966007779c7b3ccb06e6836f1a7834e0a9d77ca05ba3bd629\" returns successfully" Oct 9 00:43:08.155664 kubelet[2832]: E1009 00:43:08.154847 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:08.165284 kubelet[2832]: I1009 00:43:08.165227 2832 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gb6bf" podStartSLOduration=2.165190226 podStartE2EDuration="2.165190226s" podCreationTimestamp="2024-10-09 00:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:43:08.164739583 +0000 UTC m=+17.137676056" watchObservedRunningTime="2024-10-09 00:43:08.165190226 +0000 UTC m=+17.138126659" Oct 9 00:43:08.374122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1786751460.mount: Deactivated successfully. 
Oct 9 00:43:08.713180 containerd[1605]: time="2024-10-09T00:43:08.712665000Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:43:08.714135 containerd[1605]: time="2024-10-09T00:43:08.714082688Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138342" Oct 9 00:43:08.715018 containerd[1605]: time="2024-10-09T00:43:08.714964893Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:43:08.716930 containerd[1605]: time="2024-10-09T00:43:08.716903225Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.244034992s" Oct 9 00:43:08.717063 containerd[1605]: time="2024-10-09T00:43:08.717033466Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 9 00:43:08.723219 containerd[1605]: time="2024-10-09T00:43:08.723185222Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 9 00:43:08.724622 containerd[1605]: time="2024-10-09T00:43:08.724522830Z" level=info msg="CreateContainer within sandbox 
\"96e524fd9ee7325c7fa0bdad3c45d95ae0340a7603c369256f772b198d2ac647\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 9 00:43:08.811690 containerd[1605]: time="2024-10-09T00:43:08.811541381Z" level=info msg="CreateContainer within sandbox \"96e524fd9ee7325c7fa0bdad3c45d95ae0340a7603c369256f772b198d2ac647\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b\"" Oct 9 00:43:08.812467 containerd[1605]: time="2024-10-09T00:43:08.812235705Z" level=info msg="StartContainer for \"d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b\"" Oct 9 00:43:08.832603 systemd[1]: run-containerd-runc-k8s.io-d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b-runc.OCIVn1.mount: Deactivated successfully. Oct 9 00:43:08.858035 containerd[1605]: time="2024-10-09T00:43:08.857921293Z" level=info msg="StartContainer for \"d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b\" returns successfully" Oct 9 00:43:09.162448 kubelet[2832]: E1009 00:43:09.162375 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:10.161003 kubelet[2832]: E1009 00:43:10.160973 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:12.428947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount910915026.mount: Deactivated successfully. 
Oct 9 00:43:13.675415 containerd[1605]: time="2024-10-09T00:43:13.675364480Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:43:13.677197 containerd[1605]: time="2024-10-09T00:43:13.676955966Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651558" Oct 9 00:43:13.678462 containerd[1605]: time="2024-10-09T00:43:13.677845650Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 00:43:13.679430 containerd[1605]: time="2024-10-09T00:43:13.679396097Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.956174515s" Oct 9 00:43:13.679477 containerd[1605]: time="2024-10-09T00:43:13.679460937Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 9 00:43:13.685437 containerd[1605]: time="2024-10-09T00:43:13.685381682Z" level=info msg="CreateContainer within sandbox \"7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 9 00:43:13.698689 containerd[1605]: time="2024-10-09T00:43:13.698646699Z" level=info msg="CreateContainer within sandbox 
\"7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b\"" Oct 9 00:43:13.699483 containerd[1605]: time="2024-10-09T00:43:13.699457982Z" level=info msg="StartContainer for \"764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b\"" Oct 9 00:43:13.700329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2771725731.mount: Deactivated successfully. Oct 9 00:43:13.748526 containerd[1605]: time="2024-10-09T00:43:13.748458710Z" level=info msg="StartContainer for \"764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b\" returns successfully" Oct 9 00:43:14.119472 containerd[1605]: time="2024-10-09T00:43:14.101524026Z" level=info msg="shim disconnected" id=764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b namespace=k8s.io Oct 9 00:43:14.119472 containerd[1605]: time="2024-10-09T00:43:14.119402737Z" level=warning msg="cleaning up after shim disconnected" id=764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b namespace=k8s.io Oct 9 00:43:14.119472 containerd[1605]: time="2024-10-09T00:43:14.119417417Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:43:14.130621 containerd[1605]: time="2024-10-09T00:43:14.130583662Z" level=warning msg="cleanup warnings time=\"2024-10-09T00:43:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 00:43:14.173417 kubelet[2832]: E1009 00:43:14.173390 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:14.177494 containerd[1605]: time="2024-10-09T00:43:14.177148647Z" level=info msg="CreateContainer within sandbox 
\"7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 9 00:43:14.204182 kubelet[2832]: I1009 00:43:14.204138 2832 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-c7hnc" podStartSLOduration=5.957363707 podStartE2EDuration="7.204098795s" podCreationTimestamp="2024-10-09 00:43:07 +0000 UTC" firstStartedPulling="2024-10-09 00:43:07.471853347 +0000 UTC m=+16.444789820" lastFinishedPulling="2024-10-09 00:43:08.718588435 +0000 UTC m=+17.691524908" observedRunningTime="2024-10-09 00:43:09.177631305 +0000 UTC m=+18.150567738" watchObservedRunningTime="2024-10-09 00:43:14.204098795 +0000 UTC m=+23.177035268" Oct 9 00:43:14.237381 containerd[1605]: time="2024-10-09T00:43:14.237328767Z" level=info msg="CreateContainer within sandbox \"7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2\"" Oct 9 00:43:14.240807 containerd[1605]: time="2024-10-09T00:43:14.240753381Z" level=info msg="StartContainer for \"195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2\"" Oct 9 00:43:14.301238 containerd[1605]: time="2024-10-09T00:43:14.301180862Z" level=info msg="StartContainer for \"195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2\" returns successfully" Oct 9 00:43:14.312599 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 00:43:14.313075 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 00:43:14.313142 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 9 00:43:14.319740 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 00:43:14.358654 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Oct 9 00:43:14.360633 containerd[1605]: time="2024-10-09T00:43:14.360573098Z" level=info msg="shim disconnected" id=195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2 namespace=k8s.io Oct 9 00:43:14.360633 containerd[1605]: time="2024-10-09T00:43:14.360626698Z" level=warning msg="cleaning up after shim disconnected" id=195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2 namespace=k8s.io Oct 9 00:43:14.360633 containerd[1605]: time="2024-10-09T00:43:14.360635539Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:43:14.697171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b-rootfs.mount: Deactivated successfully. Oct 9 00:43:15.176629 kubelet[2832]: E1009 00:43:15.176597 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:15.178791 containerd[1605]: time="2024-10-09T00:43:15.178676435Z" level=info msg="CreateContainer within sandbox \"7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 9 00:43:15.203410 containerd[1605]: time="2024-10-09T00:43:15.203323327Z" level=info msg="CreateContainer within sandbox \"7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1\"" Oct 9 00:43:15.203850 containerd[1605]: time="2024-10-09T00:43:15.203815129Z" level=info msg="StartContainer for \"8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1\"" Oct 9 00:43:15.251853 containerd[1605]: time="2024-10-09T00:43:15.251802148Z" level=info msg="StartContainer for \"8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1\" returns successfully" Oct 9 00:43:15.291844 
containerd[1605]: time="2024-10-09T00:43:15.291669737Z" level=info msg="shim disconnected" id=8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1 namespace=k8s.io Oct 9 00:43:15.291844 containerd[1605]: time="2024-10-09T00:43:15.291718098Z" level=warning msg="cleaning up after shim disconnected" id=8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1 namespace=k8s.io Oct 9 00:43:15.291844 containerd[1605]: time="2024-10-09T00:43:15.291725458Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:43:15.697143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1-rootfs.mount: Deactivated successfully. Oct 9 00:43:16.179466 kubelet[2832]: E1009 00:43:16.179245 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:16.184009 containerd[1605]: time="2024-10-09T00:43:16.183959790Z" level=info msg="CreateContainer within sandbox \"7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 9 00:43:16.213241 containerd[1605]: time="2024-10-09T00:43:16.213185612Z" level=info msg="CreateContainer within sandbox \"7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60\"" Oct 9 00:43:16.214166 containerd[1605]: time="2024-10-09T00:43:16.214062255Z" level=info msg="StartContainer for \"7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60\"" Oct 9 00:43:16.265753 containerd[1605]: time="2024-10-09T00:43:16.265700596Z" level=info msg="StartContainer for \"7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60\" returns successfully" Oct 9 00:43:16.281236 containerd[1605]: 
time="2024-10-09T00:43:16.281175850Z" level=info msg="shim disconnected" id=7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60 namespace=k8s.io Oct 9 00:43:16.281236 containerd[1605]: time="2024-10-09T00:43:16.281226490Z" level=warning msg="cleaning up after shim disconnected" id=7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60 namespace=k8s.io Oct 9 00:43:16.281236 containerd[1605]: time="2024-10-09T00:43:16.281235690Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:43:16.697173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60-rootfs.mount: Deactivated successfully. Oct 9 00:43:17.183453 kubelet[2832]: E1009 00:43:17.183271 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:17.185979 containerd[1605]: time="2024-10-09T00:43:17.185686220Z" level=info msg="CreateContainer within sandbox \"7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 9 00:43:17.197092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2320590847.mount: Deactivated successfully. 
Oct 9 00:43:17.199480 containerd[1605]: time="2024-10-09T00:43:17.199441185Z" level=info msg="CreateContainer within sandbox \"7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02\"" Oct 9 00:43:17.200461 containerd[1605]: time="2024-10-09T00:43:17.199854866Z" level=info msg="StartContainer for \"7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02\"" Oct 9 00:43:17.257989 containerd[1605]: time="2024-10-09T00:43:17.257780456Z" level=info msg="StartContainer for \"7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02\" returns successfully" Oct 9 00:43:17.399847 kubelet[2832]: I1009 00:43:17.399657 2832 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 00:43:17.428713 kubelet[2832]: I1009 00:43:17.428672 2832 topology_manager.go:215] "Topology Admit Handler" podUID="d765804a-ec82-4a81-a49f-0cb9c5e51020" podNamespace="kube-system" podName="coredns-76f75df574-sndrv" Oct 9 00:43:17.428935 kubelet[2832]: I1009 00:43:17.428861 2832 topology_manager.go:215] "Topology Admit Handler" podUID="286d7fdb-9e94-4f7e-8e36-1f14c9863d0b" podNamespace="kube-system" podName="coredns-76f75df574-pzz74" Oct 9 00:43:17.591444 kubelet[2832]: I1009 00:43:17.590655 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d765804a-ec82-4a81-a49f-0cb9c5e51020-config-volume\") pod \"coredns-76f75df574-sndrv\" (UID: \"d765804a-ec82-4a81-a49f-0cb9c5e51020\") " pod="kube-system/coredns-76f75df574-sndrv" Oct 9 00:43:17.591444 kubelet[2832]: I1009 00:43:17.590706 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/286d7fdb-9e94-4f7e-8e36-1f14c9863d0b-config-volume\") pod 
\"coredns-76f75df574-pzz74\" (UID: \"286d7fdb-9e94-4f7e-8e36-1f14c9863d0b\") " pod="kube-system/coredns-76f75df574-pzz74" Oct 9 00:43:17.591444 kubelet[2832]: I1009 00:43:17.590732 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frrp9\" (UniqueName: \"kubernetes.io/projected/d765804a-ec82-4a81-a49f-0cb9c5e51020-kube-api-access-frrp9\") pod \"coredns-76f75df574-sndrv\" (UID: \"d765804a-ec82-4a81-a49f-0cb9c5e51020\") " pod="kube-system/coredns-76f75df574-sndrv" Oct 9 00:43:17.591444 kubelet[2832]: I1009 00:43:17.590753 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlnds\" (UniqueName: \"kubernetes.io/projected/286d7fdb-9e94-4f7e-8e36-1f14c9863d0b-kube-api-access-vlnds\") pod \"coredns-76f75df574-pzz74\" (UID: \"286d7fdb-9e94-4f7e-8e36-1f14c9863d0b\") " pod="kube-system/coredns-76f75df574-pzz74" Oct 9 00:43:17.733480 kubelet[2832]: E1009 00:43:17.733411 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:17.733687 kubelet[2832]: E1009 00:43:17.733665 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:17.734347 containerd[1605]: time="2024-10-09T00:43:17.734300061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sndrv,Uid:d765804a-ec82-4a81-a49f-0cb9c5e51020,Namespace:kube-system,Attempt:0,}" Oct 9 00:43:17.735334 containerd[1605]: time="2024-10-09T00:43:17.734586222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pzz74,Uid:286d7fdb-9e94-4f7e-8e36-1f14c9863d0b,Namespace:kube-system,Attempt:0,}" Oct 9 00:43:18.189019 kubelet[2832]: E1009 00:43:18.188957 2832 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:18.207149 kubelet[2832]: I1009 00:43:18.206708 2832 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-z9vnm" podStartSLOduration=6.013475312 podStartE2EDuration="12.206663131s" podCreationTimestamp="2024-10-09 00:43:06 +0000 UTC" firstStartedPulling="2024-10-09 00:43:07.486585599 +0000 UTC m=+16.459522072" lastFinishedPulling="2024-10-09 00:43:13.679773418 +0000 UTC m=+22.652709891" observedRunningTime="2024-10-09 00:43:18.205835528 +0000 UTC m=+27.178772001" watchObservedRunningTime="2024-10-09 00:43:18.206663131 +0000 UTC m=+27.179599604" Oct 9 00:43:19.190771 kubelet[2832]: E1009 00:43:19.190742 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:19.476453 systemd-networkd[1246]: cilium_host: Link UP Oct 9 00:43:19.476763 systemd-networkd[1246]: cilium_net: Link UP Oct 9 00:43:19.476923 systemd-networkd[1246]: cilium_net: Gained carrier Oct 9 00:43:19.477113 systemd-networkd[1246]: cilium_host: Gained carrier Oct 9 00:43:19.563120 systemd-networkd[1246]: cilium_vxlan: Link UP Oct 9 00:43:19.563125 systemd-networkd[1246]: cilium_vxlan: Gained carrier Oct 9 00:43:19.579527 systemd-networkd[1246]: cilium_net: Gained IPv6LL Oct 9 00:43:19.929471 kernel: NET: Registered PF_ALG protocol family Oct 9 00:43:19.964612 systemd-networkd[1246]: cilium_host: Gained IPv6LL Oct 9 00:43:20.168687 systemd[1]: Started sshd@7-10.0.0.37:22-10.0.0.1:42126.service - OpenSSH per-connection server daemon (10.0.0.1:42126). 
Oct 9 00:43:20.194399 kubelet[2832]: E1009 00:43:20.194370 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:20.202671 sshd[3812]: Accepted publickey for core from 10.0.0.1 port 42126 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:43:20.204639 sshd[3812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:43:20.211039 systemd-logind[1577]: New session 8 of user core. Oct 9 00:43:20.219780 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 00:43:20.351380 sshd[3812]: pam_unix(sshd:session): session closed for user core Oct 9 00:43:20.355751 systemd[1]: sshd@7-10.0.0.37:22-10.0.0.1:42126.service: Deactivated successfully. Oct 9 00:43:20.357684 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 00:43:20.359205 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. Oct 9 00:43:20.360084 systemd-logind[1577]: Removed session 8. Oct 9 00:43:20.512779 systemd-networkd[1246]: lxc_health: Link UP Oct 9 00:43:20.521558 systemd-networkd[1246]: lxc_health: Gained carrier Oct 9 00:43:20.893136 systemd-networkd[1246]: lxc70cb1c0d14cd: Link UP Oct 9 00:43:20.901511 kernel: eth0: renamed from tmp9177b Oct 9 00:43:20.909163 systemd-networkd[1246]: lxc70cb1c0d14cd: Gained carrier Oct 9 00:43:20.919521 kernel: eth0: renamed from tmp72daa Oct 9 00:43:20.926862 systemd-networkd[1246]: tmp72daa: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 9 00:43:20.927980 systemd-networkd[1246]: tmp72daa: Cannot enable IPv6, ignoring: No such file or directory Oct 9 00:43:20.928029 systemd-networkd[1246]: tmp72daa: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory Oct 9 00:43:20.928040 systemd-networkd[1246]: tmp72daa: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory Oct 9 00:43:20.928050 systemd-networkd[1246]: tmp72daa: Cannot set IPv6 proxy NDP, ignoring: No such file or directory Oct 9 00:43:20.928063 systemd-networkd[1246]: tmp72daa: Cannot enable promote_secondaries for interface, ignoring: No such file or directory Oct 9 00:43:20.930227 systemd-networkd[1246]: lxc5956200fb0ae: Link UP Oct 9 00:43:20.931327 systemd-networkd[1246]: lxc5956200fb0ae: Gained carrier Oct 9 00:43:21.283606 systemd-networkd[1246]: cilium_vxlan: Gained IPv6LL Oct 9 00:43:21.409549 kubelet[2832]: E1009 00:43:21.409321 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:21.923557 systemd-networkd[1246]: lxc_health: Gained IPv6LL Oct 9 00:43:22.116534 systemd-networkd[1246]: lxc70cb1c0d14cd: Gained IPv6LL Oct 9 00:43:22.197532 kubelet[2832]: E1009 00:43:22.197280 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:22.691607 systemd-networkd[1246]: lxc5956200fb0ae: Gained IPv6LL Oct 9 00:43:24.398792 containerd[1605]: time="2024-10-09T00:43:24.398677748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:43:24.404543 containerd[1605]: time="2024-10-09T00:43:24.399525230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:43:24.404543 containerd[1605]: time="2024-10-09T00:43:24.399543430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:43:24.404543 containerd[1605]: time="2024-10-09T00:43:24.399622070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:43:24.410760 containerd[1605]: time="2024-10-09T00:43:24.410677133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 00:43:24.410760 containerd[1605]: time="2024-10-09T00:43:24.410741253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 00:43:24.410869 containerd[1605]: time="2024-10-09T00:43:24.410756093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:43:24.410912 containerd[1605]: time="2024-10-09T00:43:24.410855213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 00:43:24.424175 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 00:43:24.436103 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 00:43:24.443460 containerd[1605]: time="2024-10-09T00:43:24.443393361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-sndrv,Uid:d765804a-ec82-4a81-a49f-0cb9c5e51020,Namespace:kube-system,Attempt:0,} returns sandbox id \"9177bbc8dcb8d5a67669ee1358bb8fd1a85c148480b999b782051a46a8985cd6\"" Oct 9 00:43:24.444796 kubelet[2832]: E1009 00:43:24.444585 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:24.448672 containerd[1605]: time="2024-10-09T00:43:24.448641132Z" level=info msg="CreateContainer within sandbox \"9177bbc8dcb8d5a67669ee1358bb8fd1a85c148480b999b782051a46a8985cd6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 00:43:24.458805 containerd[1605]: time="2024-10-09T00:43:24.458774594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pzz74,Uid:286d7fdb-9e94-4f7e-8e36-1f14c9863d0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"72daada2d8f7162f058ec45d72dd9e6c7291f11fcad1ec37a77b29d86631aba7\"" Oct 9 00:43:24.459617 kubelet[2832]: E1009 00:43:24.459546 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:24.460886 containerd[1605]: time="2024-10-09T00:43:24.460856358Z" level=info msg="CreateContainer within sandbox \"9177bbc8dcb8d5a67669ee1358bb8fd1a85c148480b999b782051a46a8985cd6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"92355950e9d75c3836304ab9aed0e3ca370e6e2088641993417567b9d69e9397\"" Oct 9 00:43:24.461745 containerd[1605]: time="2024-10-09T00:43:24.461708040Z" level=info msg="StartContainer for \"92355950e9d75c3836304ab9aed0e3ca370e6e2088641993417567b9d69e9397\"" Oct 9 00:43:24.462399 containerd[1605]: time="2024-10-09T00:43:24.462372001Z" level=info msg="CreateContainer within sandbox \"72daada2d8f7162f058ec45d72dd9e6c7291f11fcad1ec37a77b29d86631aba7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 00:43:24.471623 containerd[1605]: time="2024-10-09T00:43:24.471585900Z" level=info msg="CreateContainer within sandbox \"72daada2d8f7162f058ec45d72dd9e6c7291f11fcad1ec37a77b29d86631aba7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0cdf31d7b69127928ac74c8e049134ba5cd61d32df11d5fcbb6fa834c25a7e7f\"" Oct 9 00:43:24.472296 containerd[1605]: time="2024-10-09T00:43:24.472271142Z" level=info msg="StartContainer for \"0cdf31d7b69127928ac74c8e049134ba5cd61d32df11d5fcbb6fa834c25a7e7f\"" Oct 9 00:43:24.526927 containerd[1605]: time="2024-10-09T00:43:24.526183214Z" level=info msg="StartContainer for \"92355950e9d75c3836304ab9aed0e3ca370e6e2088641993417567b9d69e9397\" returns successfully" Oct 9 00:43:24.526927 containerd[1605]: time="2024-10-09T00:43:24.526345055Z" level=info msg="StartContainer for \"0cdf31d7b69127928ac74c8e049134ba5cd61d32df11d5fcbb6fa834c25a7e7f\" returns successfully" Oct 9 00:43:25.208603 kubelet[2832]: E1009 00:43:25.208564 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:25.213070 kubelet[2832]: E1009 00:43:25.212891 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:25.232365 kubelet[2832]: I1009 00:43:25.232337 2832 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-sndrv" podStartSLOduration=18.232302301 podStartE2EDuration="18.232302301s" podCreationTimestamp="2024-10-09 00:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:43:25.231654979 +0000 UTC m=+34.204591452" watchObservedRunningTime="2024-10-09 00:43:25.232302301 +0000 UTC m=+34.205238774" Oct 9 00:43:25.258958 kubelet[2832]: I1009 00:43:25.258916 2832 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-pzz74" podStartSLOduration=18.258863513 podStartE2EDuration="18.258863513s" podCreationTimestamp="2024-10-09 00:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:43:25.258860113 +0000 UTC m=+34.231796586" watchObservedRunningTime="2024-10-09 00:43:25.258863513 +0000 UTC m=+34.231799986" Oct 9 00:43:25.368669 systemd[1]: Started sshd@8-10.0.0.37:22-10.0.0.1:36244.service - OpenSSH per-connection server daemon (10.0.0.1:36244). Oct 9 00:43:25.399177 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 36244 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:43:25.400375 sshd[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:43:25.408065 systemd-logind[1577]: New session 9 of user core. Oct 9 00:43:25.413709 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 00:43:25.529980 sshd[4244]: pam_unix(sshd:session): session closed for user core Oct 9 00:43:25.533045 systemd[1]: sshd@8-10.0.0.37:22-10.0.0.1:36244.service: Deactivated successfully. Oct 9 00:43:25.535156 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 00:43:25.535174 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. 
Oct 9 00:43:25.537333 systemd-logind[1577]: Removed session 9. Oct 9 00:43:26.214497 kubelet[2832]: E1009 00:43:26.214314 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:26.214497 kubelet[2832]: E1009 00:43:26.214392 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:27.215835 kubelet[2832]: E1009 00:43:27.215794 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:27.216199 kubelet[2832]: E1009 00:43:27.216157 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:43:30.549026 systemd[1]: Started sshd@9-10.0.0.37:22-10.0.0.1:36254.service - OpenSSH per-connection server daemon (10.0.0.1:36254). Oct 9 00:43:30.585659 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 36254 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:43:30.585385 sshd[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:43:30.593192 systemd-logind[1577]: New session 10 of user core. Oct 9 00:43:30.599669 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 00:43:30.739459 sshd[4265]: pam_unix(sshd:session): session closed for user core Oct 9 00:43:30.748657 systemd[1]: Started sshd@10-10.0.0.37:22-10.0.0.1:36266.service - OpenSSH per-connection server daemon (10.0.0.1:36266). Oct 9 00:43:30.749038 systemd[1]: sshd@9-10.0.0.37:22-10.0.0.1:36254.service: Deactivated successfully. Oct 9 00:43:30.752773 systemd[1]: session-10.scope: Deactivated successfully. 
Oct 9 00:43:30.753244 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. Oct 9 00:43:30.759077 systemd-logind[1577]: Removed session 10. Oct 9 00:43:30.788228 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 36266 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:43:30.789887 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:43:30.795351 systemd-logind[1577]: New session 11 of user core. Oct 9 00:43:30.799739 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 00:43:30.962386 sshd[4278]: pam_unix(sshd:session): session closed for user core Oct 9 00:43:30.969936 systemd[1]: Started sshd@11-10.0.0.37:22-10.0.0.1:36278.service - OpenSSH per-connection server daemon (10.0.0.1:36278). Oct 9 00:43:30.970313 systemd[1]: sshd@10-10.0.0.37:22-10.0.0.1:36266.service: Deactivated successfully. Oct 9 00:43:30.976450 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. Oct 9 00:43:30.976545 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 00:43:30.982477 systemd-logind[1577]: Removed session 11. Oct 9 00:43:31.008513 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 36278 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:43:31.008950 sshd[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:43:31.012697 systemd-logind[1577]: New session 12 of user core. Oct 9 00:43:31.021738 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 00:43:31.134249 sshd[4291]: pam_unix(sshd:session): session closed for user core Oct 9 00:43:31.137394 systemd[1]: sshd@11-10.0.0.37:22-10.0.0.1:36278.service: Deactivated successfully. Oct 9 00:43:31.139382 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. Oct 9 00:43:31.139494 systemd[1]: session-12.scope: Deactivated successfully. 
Oct 9 00:43:31.141026 systemd-logind[1577]: Removed session 12. Oct 9 00:43:36.150677 systemd[1]: Started sshd@12-10.0.0.37:22-10.0.0.1:54724.service - OpenSSH per-connection server daemon (10.0.0.1:54724). Oct 9 00:43:36.179853 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 54724 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:43:36.181229 sshd[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:43:36.184818 systemd-logind[1577]: New session 13 of user core. Oct 9 00:43:36.194678 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 00:43:36.306806 sshd[4310]: pam_unix(sshd:session): session closed for user core Oct 9 00:43:36.310587 systemd[1]: sshd@12-10.0.0.37:22-10.0.0.1:54724.service: Deactivated successfully. Oct 9 00:43:36.312551 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. Oct 9 00:43:36.312649 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 00:43:36.314488 systemd-logind[1577]: Removed session 13. Oct 9 00:43:41.317833 systemd[1]: Started sshd@13-10.0.0.37:22-10.0.0.1:54730.service - OpenSSH per-connection server daemon (10.0.0.1:54730). Oct 9 00:43:41.355285 sshd[4327]: Accepted publickey for core from 10.0.0.1 port 54730 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:43:41.356833 sshd[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:43:41.361791 systemd-logind[1577]: New session 14 of user core. Oct 9 00:43:41.376768 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 00:43:41.495328 sshd[4327]: pam_unix(sshd:session): session closed for user core Oct 9 00:43:41.506684 systemd[1]: Started sshd@14-10.0.0.37:22-10.0.0.1:54742.service - OpenSSH per-connection server daemon (10.0.0.1:54742). Oct 9 00:43:41.507620 systemd[1]: sshd@13-10.0.0.37:22-10.0.0.1:54730.service: Deactivated successfully. 
Oct 9 00:43:41.510910 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 00:43:41.512316 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit. Oct 9 00:43:41.513979 systemd-logind[1577]: Removed session 14. Oct 9 00:43:41.539274 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 54742 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:43:41.540546 sshd[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:43:41.544478 systemd-logind[1577]: New session 15 of user core. Oct 9 00:43:41.555697 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 00:43:41.829213 sshd[4339]: pam_unix(sshd:session): session closed for user core Oct 9 00:43:41.836672 systemd[1]: Started sshd@15-10.0.0.37:22-10.0.0.1:54750.service - OpenSSH per-connection server daemon (10.0.0.1:54750). Oct 9 00:43:41.837037 systemd[1]: sshd@14-10.0.0.37:22-10.0.0.1:54742.service: Deactivated successfully. Oct 9 00:43:41.839854 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 00:43:41.841139 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit. Oct 9 00:43:41.842225 systemd-logind[1577]: Removed session 15. Oct 9 00:43:41.872492 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 54750 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:43:41.873759 sshd[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:43:41.878085 systemd-logind[1577]: New session 16 of user core. Oct 9 00:43:41.892716 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 00:43:43.185122 sshd[4352]: pam_unix(sshd:session): session closed for user core Oct 9 00:43:43.195799 systemd[1]: Started sshd@16-10.0.0.37:22-10.0.0.1:58408.service - OpenSSH per-connection server daemon (10.0.0.1:58408). Oct 9 00:43:43.196193 systemd[1]: sshd@15-10.0.0.37:22-10.0.0.1:54750.service: Deactivated successfully. 
Oct 9 00:43:43.202437 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 00:43:43.204396 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit. Oct 9 00:43:43.210107 systemd-logind[1577]: Removed session 16. Oct 9 00:43:43.236980 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 58408 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:43:43.238217 sshd[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:43:43.241913 systemd-logind[1577]: New session 17 of user core. Oct 9 00:43:43.257755 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 00:43:43.477111 sshd[4375]: pam_unix(sshd:session): session closed for user core Oct 9 00:43:43.487710 systemd[1]: Started sshd@17-10.0.0.37:22-10.0.0.1:58414.service - OpenSSH per-connection server daemon (10.0.0.1:58414). Oct 9 00:43:43.488105 systemd[1]: sshd@16-10.0.0.37:22-10.0.0.1:58408.service: Deactivated successfully. Oct 9 00:43:43.490169 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit. Oct 9 00:43:43.491148 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 00:43:43.493835 systemd-logind[1577]: Removed session 17. Oct 9 00:43:43.524492 sshd[4388]: Accepted publickey for core from 10.0.0.1 port 58414 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:43:43.526006 sshd[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:43:43.530335 systemd-logind[1577]: New session 18 of user core. Oct 9 00:43:43.538750 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 00:43:43.661159 sshd[4388]: pam_unix(sshd:session): session closed for user core Oct 9 00:43:43.664297 systemd[1]: sshd@17-10.0.0.37:22-10.0.0.1:58414.service: Deactivated successfully. Oct 9 00:43:43.666247 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 00:43:43.666344 systemd-logind[1577]: Session 18 logged out. 
Waiting for processes to exit. Oct 9 00:43:43.667459 systemd-logind[1577]: Removed session 18. Oct 9 00:43:48.676654 systemd[1]: Started sshd@18-10.0.0.37:22-10.0.0.1:58416.service - OpenSSH per-connection server daemon (10.0.0.1:58416). Oct 9 00:43:48.703935 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 58416 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:43:48.705113 sshd[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:43:48.708336 systemd-logind[1577]: New session 19 of user core. Oct 9 00:43:48.717642 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 00:43:48.821111 sshd[4409]: pam_unix(sshd:session): session closed for user core Oct 9 00:43:48.824830 systemd[1]: sshd@18-10.0.0.37:22-10.0.0.1:58416.service: Deactivated successfully. Oct 9 00:43:48.826895 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 00:43:48.827275 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit. Oct 9 00:43:48.828019 systemd-logind[1577]: Removed session 19. Oct 9 00:43:53.831640 systemd[1]: Started sshd@19-10.0.0.37:22-10.0.0.1:56894.service - OpenSSH per-connection server daemon (10.0.0.1:56894). Oct 9 00:43:53.859192 sshd[4426]: Accepted publickey for core from 10.0.0.1 port 56894 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:43:53.860264 sshd[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:43:53.863814 systemd-logind[1577]: New session 20 of user core. Oct 9 00:43:53.871634 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 00:43:53.977449 sshd[4426]: pam_unix(sshd:session): session closed for user core Oct 9 00:43:53.979961 systemd[1]: sshd@19-10.0.0.37:22-10.0.0.1:56894.service: Deactivated successfully. Oct 9 00:43:53.982581 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit. 
Oct 9 00:43:53.983209 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 00:43:53.984297 systemd-logind[1577]: Removed session 20. Oct 9 00:43:58.996655 systemd[1]: Started sshd@20-10.0.0.37:22-10.0.0.1:56906.service - OpenSSH per-connection server daemon (10.0.0.1:56906). Oct 9 00:43:59.025013 sshd[4441]: Accepted publickey for core from 10.0.0.1 port 56906 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:43:59.026173 sshd[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:43:59.029696 systemd-logind[1577]: New session 21 of user core. Oct 9 00:43:59.041722 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 00:43:59.152160 sshd[4441]: pam_unix(sshd:session): session closed for user core Oct 9 00:43:59.155291 systemd[1]: sshd@20-10.0.0.37:22-10.0.0.1:56906.service: Deactivated successfully. Oct 9 00:43:59.157772 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 00:43:59.157775 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit. Oct 9 00:43:59.158956 systemd-logind[1577]: Removed session 21. Oct 9 00:44:04.166904 systemd[1]: Started sshd@21-10.0.0.37:22-10.0.0.1:38812.service - OpenSSH per-connection server daemon (10.0.0.1:38812). Oct 9 00:44:04.195074 sshd[4457]: Accepted publickey for core from 10.0.0.1 port 38812 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:44:04.196530 sshd[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:44:04.200499 systemd-logind[1577]: New session 22 of user core. Oct 9 00:44:04.209726 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 00:44:04.319676 sshd[4457]: pam_unix(sshd:session): session closed for user core Oct 9 00:44:04.330701 systemd[1]: Started sshd@22-10.0.0.37:22-10.0.0.1:38816.service - OpenSSH per-connection server daemon (10.0.0.1:38816). 
Oct 9 00:44:04.331105 systemd[1]: sshd@21-10.0.0.37:22-10.0.0.1:38812.service: Deactivated successfully. Oct 9 00:44:04.334048 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 00:44:04.334862 systemd-logind[1577]: Session 22 logged out. Waiting for processes to exit. Oct 9 00:44:04.335952 systemd-logind[1577]: Removed session 22. Oct 9 00:44:04.360838 sshd[4469]: Accepted publickey for core from 10.0.0.1 port 38816 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:44:04.362563 sshd[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:44:04.367306 systemd-logind[1577]: New session 23 of user core. Oct 9 00:44:04.374765 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 00:44:06.286309 containerd[1605]: time="2024-10-09T00:44:06.286268168Z" level=info msg="StopContainer for \"d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b\" with timeout 30 (s)" Oct 9 00:44:06.288842 containerd[1605]: time="2024-10-09T00:44:06.287034991Z" level=info msg="Stop container \"d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b\" with signal terminated" Oct 9 00:44:06.323300 containerd[1605]: time="2024-10-09T00:44:06.323180248Z" level=info msg="StopContainer for \"7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02\" with timeout 2 (s)" Oct 9 00:44:06.323604 containerd[1605]: time="2024-10-09T00:44:06.323534939Z" level=info msg="Stop container \"7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02\" with signal terminated" Oct 9 00:44:06.323604 containerd[1605]: time="2024-10-09T00:44:06.323707344Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 00:44:06.328700 systemd-networkd[1246]: lxc_health: Link DOWN Oct 9 00:44:06.328707 
systemd-networkd[1246]: lxc_health: Lost carrier Oct 9 00:44:06.331327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b-rootfs.mount: Deactivated successfully. Oct 9 00:44:06.336406 containerd[1605]: time="2024-10-09T00:44:06.336360394Z" level=info msg="shim disconnected" id=d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b namespace=k8s.io Oct 9 00:44:06.336543 containerd[1605]: time="2024-10-09T00:44:06.336409235Z" level=warning msg="cleaning up after shim disconnected" id=d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b namespace=k8s.io Oct 9 00:44:06.336543 containerd[1605]: time="2024-10-09T00:44:06.336418995Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:44:06.368974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02-rootfs.mount: Deactivated successfully. Oct 9 00:44:06.377275 containerd[1605]: time="2024-10-09T00:44:06.377209709Z" level=info msg="shim disconnected" id=7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02 namespace=k8s.io Oct 9 00:44:06.377275 containerd[1605]: time="2024-10-09T00:44:06.377267510Z" level=warning msg="cleaning up after shim disconnected" id=7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02 namespace=k8s.io Oct 9 00:44:06.377275 containerd[1605]: time="2024-10-09T00:44:06.377276431Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:44:06.387878 containerd[1605]: time="2024-10-09T00:44:06.387827179Z" level=warning msg="cleanup warnings time=\"2024-10-09T00:44:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 00:44:06.390864 containerd[1605]: time="2024-10-09T00:44:06.390821347Z" level=info msg="StopContainer for 
\"7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02\" returns successfully" Oct 9 00:44:06.391759 containerd[1605]: time="2024-10-09T00:44:06.391735974Z" level=info msg="StopContainer for \"d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b\" returns successfully" Oct 9 00:44:06.395388 containerd[1605]: time="2024-10-09T00:44:06.395340799Z" level=info msg="StopPodSandbox for \"96e524fd9ee7325c7fa0bdad3c45d95ae0340a7603c369256f772b198d2ac647\"" Oct 9 00:44:06.395478 containerd[1605]: time="2024-10-09T00:44:06.395388160Z" level=info msg="Container to stop \"d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:44:06.398447 containerd[1605]: time="2024-10-09T00:44:06.396378829Z" level=info msg="StopPodSandbox for \"7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81\"" Oct 9 00:44:06.398447 containerd[1605]: time="2024-10-09T00:44:06.396457352Z" level=info msg="Container to stop \"764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:44:06.398447 containerd[1605]: time="2024-10-09T00:44:06.396470672Z" level=info msg="Container to stop \"195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:44:06.398447 containerd[1605]: time="2024-10-09T00:44:06.396479432Z" level=info msg="Container to stop \"7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:44:06.398447 containerd[1605]: time="2024-10-09T00:44:06.396488553Z" level=info msg="Container to stop \"8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:44:06.398447 containerd[1605]: 
time="2024-10-09T00:44:06.396496673Z" level=info msg="Container to stop \"7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 9 00:44:06.399532 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81-shm.mount: Deactivated successfully. Oct 9 00:44:06.399696 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96e524fd9ee7325c7fa0bdad3c45d95ae0340a7603c369256f772b198d2ac647-shm.mount: Deactivated successfully. Oct 9 00:44:06.421753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81-rootfs.mount: Deactivated successfully. Oct 9 00:44:06.422654 containerd[1605]: time="2024-10-09T00:44:06.422602956Z" level=info msg="shim disconnected" id=7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81 namespace=k8s.io Oct 9 00:44:06.422654 containerd[1605]: time="2024-10-09T00:44:06.422653398Z" level=warning msg="cleaning up after shim disconnected" id=7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81 namespace=k8s.io Oct 9 00:44:06.422762 containerd[1605]: time="2024-10-09T00:44:06.422661158Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:44:06.430401 containerd[1605]: time="2024-10-09T00:44:06.430326822Z" level=info msg="shim disconnected" id=96e524fd9ee7325c7fa0bdad3c45d95ae0340a7603c369256f772b198d2ac647 namespace=k8s.io Oct 9 00:44:06.430401 containerd[1605]: time="2024-10-09T00:44:06.430391984Z" level=warning msg="cleaning up after shim disconnected" id=96e524fd9ee7325c7fa0bdad3c45d95ae0340a7603c369256f772b198d2ac647 namespace=k8s.io Oct 9 00:44:06.430401 containerd[1605]: time="2024-10-09T00:44:06.430400545Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 00:44:06.434595 containerd[1605]: time="2024-10-09T00:44:06.434512385Z" level=info msg="TearDown network for 
sandbox \"7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81\" successfully" Oct 9 00:44:06.434595 containerd[1605]: time="2024-10-09T00:44:06.434543066Z" level=info msg="StopPodSandbox for \"7c6ff319dc96dd11de550aa9da0ee58990c3ac0578277ab86678e35392e32e81\" returns successfully" Oct 9 00:44:06.452413 containerd[1605]: time="2024-10-09T00:44:06.452356347Z" level=info msg="TearDown network for sandbox \"96e524fd9ee7325c7fa0bdad3c45d95ae0340a7603c369256f772b198d2ac647\" successfully" Oct 9 00:44:06.452413 containerd[1605]: time="2024-10-09T00:44:06.452391748Z" level=info msg="StopPodSandbox for \"96e524fd9ee7325c7fa0bdad3c45d95ae0340a7603c369256f772b198d2ac647\" returns successfully" Oct 9 00:44:06.550246 kubelet[2832]: I1009 00:44:06.549049 2832 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-host-proc-sys-kernel\") pod \"d8ab3d93-f975-422f-9108-8a84e64a0447\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " Oct 9 00:44:06.550246 kubelet[2832]: I1009 00:44:06.549095 2832 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-bpf-maps\") pod \"d8ab3d93-f975-422f-9108-8a84e64a0447\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " Oct 9 00:44:06.550246 kubelet[2832]: I1009 00:44:06.549142 2832 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3888e6d3-cef2-410c-8a64-c0c3ce1214dd-cilium-config-path\") pod \"3888e6d3-cef2-410c-8a64-c0c3ce1214dd\" (UID: \"3888e6d3-cef2-410c-8a64-c0c3ce1214dd\") " Oct 9 00:44:06.550246 kubelet[2832]: I1009 00:44:06.549161 2832 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-xtables-lock\") pod \"d8ab3d93-f975-422f-9108-8a84e64a0447\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " Oct 9 00:44:06.550246 kubelet[2832]: I1009 00:44:06.549182 2832 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8ab3d93-f975-422f-9108-8a84e64a0447-clustermesh-secrets\") pod \"d8ab3d93-f975-422f-9108-8a84e64a0447\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " Oct 9 00:44:06.550246 kubelet[2832]: I1009 00:44:06.549197 2832 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-cni-path\") pod \"d8ab3d93-f975-422f-9108-8a84e64a0447\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " Oct 9 00:44:06.550707 kubelet[2832]: I1009 00:44:06.549215 2832 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-lib-modules\") pod \"d8ab3d93-f975-422f-9108-8a84e64a0447\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " Oct 9 00:44:06.550707 kubelet[2832]: I1009 00:44:06.549233 2832 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-hostproc\") pod \"d8ab3d93-f975-422f-9108-8a84e64a0447\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " Oct 9 00:44:06.550707 kubelet[2832]: I1009 00:44:06.549254 2832 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-etc-cni-netd\") pod \"d8ab3d93-f975-422f-9108-8a84e64a0447\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " Oct 9 00:44:06.550707 kubelet[2832]: I1009 00:44:06.549271 2832 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-cilium-cgroup\") pod \"d8ab3d93-f975-422f-9108-8a84e64a0447\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " Oct 9 00:44:06.550707 kubelet[2832]: I1009 00:44:06.549292 2832 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8flhj\" (UniqueName: \"kubernetes.io/projected/d8ab3d93-f975-422f-9108-8a84e64a0447-kube-api-access-8flhj\") pod \"d8ab3d93-f975-422f-9108-8a84e64a0447\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " Oct 9 00:44:06.550707 kubelet[2832]: I1009 00:44:06.549316 2832 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-host-proc-sys-net\") pod \"d8ab3d93-f975-422f-9108-8a84e64a0447\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " Oct 9 00:44:06.550834 kubelet[2832]: I1009 00:44:06.549333 2832 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-cilium-run\") pod \"d8ab3d93-f975-422f-9108-8a84e64a0447\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " Oct 9 00:44:06.550834 kubelet[2832]: I1009 00:44:06.549354 2832 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8ab3d93-f975-422f-9108-8a84e64a0447-cilium-config-path\") pod \"d8ab3d93-f975-422f-9108-8a84e64a0447\" (UID: \"d8ab3d93-f975-422f-9108-8a84e64a0447\") " Oct 9 00:44:06.550834 kubelet[2832]: I1009 00:44:06.549507 2832 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8ab3d93-f975-422f-9108-8a84e64a0447-hubble-tls\") pod \"d8ab3d93-f975-422f-9108-8a84e64a0447\" (UID: 
\"d8ab3d93-f975-422f-9108-8a84e64a0447\") " Oct 9 00:44:06.550834 kubelet[2832]: I1009 00:44:06.549540 2832 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24gv5\" (UniqueName: \"kubernetes.io/projected/3888e6d3-cef2-410c-8a64-c0c3ce1214dd-kube-api-access-24gv5\") pod \"3888e6d3-cef2-410c-8a64-c0c3ce1214dd\" (UID: \"3888e6d3-cef2-410c-8a64-c0c3ce1214dd\") " Oct 9 00:44:06.553453 kubelet[2832]: I1009 00:44:06.553171 2832 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d8ab3d93-f975-422f-9108-8a84e64a0447" (UID: "d8ab3d93-f975-422f-9108-8a84e64a0447"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:44:06.553453 kubelet[2832]: I1009 00:44:06.553442 2832 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d8ab3d93-f975-422f-9108-8a84e64a0447" (UID: "d8ab3d93-f975-422f-9108-8a84e64a0447"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:44:06.553568 kubelet[2832]: I1009 00:44:06.553487 2832 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d8ab3d93-f975-422f-9108-8a84e64a0447" (UID: "d8ab3d93-f975-422f-9108-8a84e64a0447"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:44:06.553992 kubelet[2832]: I1009 00:44:06.553753 2832 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d8ab3d93-f975-422f-9108-8a84e64a0447" (UID: "d8ab3d93-f975-422f-9108-8a84e64a0447"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:44:06.554030 kubelet[2832]: I1009 00:44:06.554003 2832 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d8ab3d93-f975-422f-9108-8a84e64a0447" (UID: "d8ab3d93-f975-422f-9108-8a84e64a0447"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:44:06.558990 kubelet[2832]: I1009 00:44:06.558960 2832 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3888e6d3-cef2-410c-8a64-c0c3ce1214dd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3888e6d3-cef2-410c-8a64-c0c3ce1214dd" (UID: "3888e6d3-cef2-410c-8a64-c0c3ce1214dd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 00:44:06.561162 kubelet[2832]: I1009 00:44:06.560927 2832 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8ab3d93-f975-422f-9108-8a84e64a0447-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d8ab3d93-f975-422f-9108-8a84e64a0447" (UID: "d8ab3d93-f975-422f-9108-8a84e64a0447"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 9 00:44:06.561162 kubelet[2832]: I1009 00:44:06.560976 2832 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d8ab3d93-f975-422f-9108-8a84e64a0447" (UID: "d8ab3d93-f975-422f-9108-8a84e64a0447"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:44:06.561364 kubelet[2832]: I1009 00:44:06.561328 2832 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3888e6d3-cef2-410c-8a64-c0c3ce1214dd-kube-api-access-24gv5" (OuterVolumeSpecName: "kube-api-access-24gv5") pod "3888e6d3-cef2-410c-8a64-c0c3ce1214dd" (UID: "3888e6d3-cef2-410c-8a64-c0c3ce1214dd"). InnerVolumeSpecName "kube-api-access-24gv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 00:44:06.561364 kubelet[2832]: I1009 00:44:06.561358 2832 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-cni-path" (OuterVolumeSpecName: "cni-path") pod "d8ab3d93-f975-422f-9108-8a84e64a0447" (UID: "d8ab3d93-f975-422f-9108-8a84e64a0447"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:44:06.561444 kubelet[2832]: I1009 00:44:06.561334 2832 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8ab3d93-f975-422f-9108-8a84e64a0447-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d8ab3d93-f975-422f-9108-8a84e64a0447" (UID: "d8ab3d93-f975-422f-9108-8a84e64a0447"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 9 00:44:06.561444 kubelet[2832]: I1009 00:44:06.561384 2832 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d8ab3d93-f975-422f-9108-8a84e64a0447" (UID: "d8ab3d93-f975-422f-9108-8a84e64a0447"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:44:06.561444 kubelet[2832]: I1009 00:44:06.561399 2832 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-hostproc" (OuterVolumeSpecName: "hostproc") pod "d8ab3d93-f975-422f-9108-8a84e64a0447" (UID: "d8ab3d93-f975-422f-9108-8a84e64a0447"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:44:06.561444 kubelet[2832]: I1009 00:44:06.561402 2832 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d8ab3d93-f975-422f-9108-8a84e64a0447" (UID: "d8ab3d93-f975-422f-9108-8a84e64a0447"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 9 00:44:06.563016 kubelet[2832]: I1009 00:44:06.562981 2832 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ab3d93-f975-422f-9108-8a84e64a0447-kube-api-access-8flhj" (OuterVolumeSpecName: "kube-api-access-8flhj") pod "d8ab3d93-f975-422f-9108-8a84e64a0447" (UID: "d8ab3d93-f975-422f-9108-8a84e64a0447"). InnerVolumeSpecName "kube-api-access-8flhj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 00:44:06.563461 kubelet[2832]: I1009 00:44:06.563419 2832 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ab3d93-f975-422f-9108-8a84e64a0447-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d8ab3d93-f975-422f-9108-8a84e64a0447" (UID: "d8ab3d93-f975-422f-9108-8a84e64a0447"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 9 00:44:06.650506 kubelet[2832]: I1009 00:44:06.650460 2832 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8ab3d93-f975-422f-9108-8a84e64a0447-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 9 00:44:06.650506 kubelet[2832]: I1009 00:44:06.650492 2832 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 9 00:44:06.650506 kubelet[2832]: I1009 00:44:06.650503 2832 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8ab3d93-f975-422f-9108-8a84e64a0447-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 9 00:44:06.650506 kubelet[2832]: I1009 00:44:06.650514 2832 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-24gv5\" (UniqueName: \"kubernetes.io/projected/3888e6d3-cef2-410c-8a64-c0c3ce1214dd-kube-api-access-24gv5\") on node \"localhost\" DevicePath \"\"" Oct 9 00:44:06.650714 kubelet[2832]: I1009 00:44:06.650524 2832 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 9 00:44:06.650714 kubelet[2832]: I1009 00:44:06.650533 2832 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 9 00:44:06.650714 kubelet[2832]: I1009 00:44:06.650542 2832 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8ab3d93-f975-422f-9108-8a84e64a0447-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 9 00:44:06.650714 kubelet[2832]: I1009 00:44:06.650552 2832 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 9 00:44:06.650714 kubelet[2832]: I1009 00:44:06.650567 2832 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 9 00:44:06.650714 kubelet[2832]: I1009 00:44:06.650578 2832 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3888e6d3-cef2-410c-8a64-c0c3ce1214dd-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 9 00:44:06.650714 kubelet[2832]: I1009 00:44:06.650587 2832 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 9 00:44:06.650714 kubelet[2832]: I1009 00:44:06.650596 2832 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 9 00:44:06.650872 kubelet[2832]: I1009 00:44:06.650605 2832 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 9 
00:44:06.650872 kubelet[2832]: I1009 00:44:06.650613 2832 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 9 00:44:06.650872 kubelet[2832]: I1009 00:44:06.650622 2832 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8ab3d93-f975-422f-9108-8a84e64a0447-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 9 00:44:06.650872 kubelet[2832]: I1009 00:44:06.650631 2832 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8flhj\" (UniqueName: \"kubernetes.io/projected/d8ab3d93-f975-422f-9108-8a84e64a0447-kube-api-access-8flhj\") on node \"localhost\" DevicePath \"\"" Oct 9 00:44:07.115592 kubelet[2832]: E1009 00:44:07.115511 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 00:44:07.299299 kubelet[2832]: I1009 00:44:07.299259 2832 scope.go:117] "RemoveContainer" containerID="7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02" Oct 9 00:44:07.301878 containerd[1605]: time="2024-10-09T00:44:07.301843364Z" level=info msg="RemoveContainer for \"7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02\"" Oct 9 00:44:07.302060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96e524fd9ee7325c7fa0bdad3c45d95ae0340a7603c369256f772b198d2ac647-rootfs.mount: Deactivated successfully. Oct 9 00:44:07.302329 systemd[1]: var-lib-kubelet-pods-3888e6d3\x2dcef2\x2d410c\x2d8a64\x2dc0c3ce1214dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d24gv5.mount: Deactivated successfully. 
Oct 9 00:44:07.302491 systemd[1]: var-lib-kubelet-pods-d8ab3d93\x2df975\x2d422f\x2d9108\x2d8a84e64a0447-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8flhj.mount: Deactivated successfully. Oct 9 00:44:07.302711 systemd[1]: var-lib-kubelet-pods-d8ab3d93\x2df975\x2d422f\x2d9108\x2d8a84e64a0447-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 9 00:44:07.302860 systemd[1]: var-lib-kubelet-pods-d8ab3d93\x2df975\x2d422f\x2d9108\x2d8a84e64a0447-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 9 00:44:07.309575 containerd[1605]: time="2024-10-09T00:44:07.309531062Z" level=info msg="RemoveContainer for \"7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02\" returns successfully" Oct 9 00:44:07.310246 kubelet[2832]: I1009 00:44:07.309985 2832 scope.go:117] "RemoveContainer" containerID="7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60" Oct 9 00:44:07.312808 containerd[1605]: time="2024-10-09T00:44:07.312517108Z" level=info msg="RemoveContainer for \"7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60\"" Oct 9 00:44:07.325478 containerd[1605]: time="2024-10-09T00:44:07.325413755Z" level=info msg="RemoveContainer for \"7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60\" returns successfully" Oct 9 00:44:07.326077 kubelet[2832]: I1009 00:44:07.326045 2832 scope.go:117] "RemoveContainer" containerID="8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1" Oct 9 00:44:07.327354 containerd[1605]: time="2024-10-09T00:44:07.327320649Z" level=info msg="RemoveContainer for \"8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1\"" Oct 9 00:44:07.334118 containerd[1605]: time="2024-10-09T00:44:07.334075601Z" level=info msg="RemoveContainer for \"8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1\" returns successfully" Oct 9 00:44:07.334311 kubelet[2832]: I1009 00:44:07.334280 2832 scope.go:117] 
"RemoveContainer" containerID="195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2" Oct 9 00:44:07.335467 containerd[1605]: time="2024-10-09T00:44:07.335438640Z" level=info msg="RemoveContainer for \"195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2\"" Oct 9 00:44:07.366349 containerd[1605]: time="2024-10-09T00:44:07.366249598Z" level=info msg="RemoveContainer for \"195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2\" returns successfully" Oct 9 00:44:07.366536 kubelet[2832]: I1009 00:44:07.366505 2832 scope.go:117] "RemoveContainer" containerID="764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b" Oct 9 00:44:07.367906 containerd[1605]: time="2024-10-09T00:44:07.367877684Z" level=info msg="RemoveContainer for \"764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b\"" Oct 9 00:44:07.384749 containerd[1605]: time="2024-10-09T00:44:07.384715244Z" level=info msg="RemoveContainer for \"764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b\" returns successfully" Oct 9 00:44:07.384910 kubelet[2832]: I1009 00:44:07.384890 2832 scope.go:117] "RemoveContainer" containerID="7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02" Oct 9 00:44:07.385114 containerd[1605]: time="2024-10-09T00:44:07.385073134Z" level=error msg="ContainerStatus for \"7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02\": not found" Oct 9 00:44:07.387522 kubelet[2832]: E1009 00:44:07.387448 2832 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02\": not found" containerID="7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02" Oct 9 00:44:07.390777 kubelet[2832]: 
I1009 00:44:07.390647 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02"} err="failed to get container status \"7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ea906ea6fbe35c150321f5550ed320d516b62198984c44432e3ab267ac54a02\": not found" Oct 9 00:44:07.390777 kubelet[2832]: I1009 00:44:07.390684 2832 scope.go:117] "RemoveContainer" containerID="7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60" Oct 9 00:44:07.390964 containerd[1605]: time="2024-10-09T00:44:07.390907580Z" level=error msg="ContainerStatus for \"7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60\": not found" Oct 9 00:44:07.391060 kubelet[2832]: E1009 00:44:07.391045 2832 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60\": not found" containerID="7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60" Oct 9 00:44:07.391105 kubelet[2832]: I1009 00:44:07.391073 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60"} err="failed to get container status \"7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f0b26631dff407afcc69033f95a0068b19a2b9603f1fb93be88d7cdf2f44b60\": not found" Oct 9 00:44:07.391105 kubelet[2832]: I1009 00:44:07.391083 2832 scope.go:117] "RemoveContainer" 
containerID="8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1" Oct 9 00:44:07.391279 containerd[1605]: time="2024-10-09T00:44:07.391238669Z" level=error msg="ContainerStatus for \"8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1\": not found" Oct 9 00:44:07.391543 kubelet[2832]: E1009 00:44:07.391413 2832 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1\": not found" containerID="8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1" Oct 9 00:44:07.391543 kubelet[2832]: I1009 00:44:07.391462 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1"} err="failed to get container status \"8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d7cb7b75ad8a385e19ae7d723bc5421dffe79dce3217c1196372d82d5ec78b1\": not found" Oct 9 00:44:07.391543 kubelet[2832]: I1009 00:44:07.391475 2832 scope.go:117] "RemoveContainer" containerID="195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2" Oct 9 00:44:07.391715 containerd[1605]: time="2024-10-09T00:44:07.391655121Z" level=error msg="ContainerStatus for \"195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2\": not found" Oct 9 00:44:07.391855 kubelet[2832]: E1009 00:44:07.391800 2832 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2\": not found" containerID="195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2" Oct 9 00:44:07.391961 kubelet[2832]: I1009 00:44:07.391917 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2"} err="failed to get container status \"195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2\": rpc error: code = NotFound desc = an error occurred when try to find container \"195574f44f7b179a92dd55459df2d0ffcf84bdb7f204a37b5e13f46765147fd2\": not found" Oct 9 00:44:07.391961 kubelet[2832]: I1009 00:44:07.391930 2832 scope.go:117] "RemoveContainer" containerID="764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b" Oct 9 00:44:07.392161 containerd[1605]: time="2024-10-09T00:44:07.392123095Z" level=error msg="ContainerStatus for \"764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b\": not found" Oct 9 00:44:07.392264 kubelet[2832]: E1009 00:44:07.392219 2832 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b\": not found" containerID="764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b" Oct 9 00:44:07.392264 kubelet[2832]: I1009 00:44:07.392246 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b"} err="failed to get container status \"764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b\": rpc error: code = 
NotFound desc = an error occurred when try to find container \"764574aaa89cfa63919b94604baf6db759474ee6c37b7d922af025e3b663ab7b\": not found" Oct 9 00:44:07.392264 kubelet[2832]: I1009 00:44:07.392255 2832 scope.go:117] "RemoveContainer" containerID="d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b" Oct 9 00:44:07.393065 containerd[1605]: time="2024-10-09T00:44:07.393041321Z" level=info msg="RemoveContainer for \"d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b\"" Oct 9 00:44:07.395638 containerd[1605]: time="2024-10-09T00:44:07.395599954Z" level=info msg="RemoveContainer for \"d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b\" returns successfully" Oct 9 00:44:07.395814 kubelet[2832]: I1009 00:44:07.395771 2832 scope.go:117] "RemoveContainer" containerID="d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b" Oct 9 00:44:07.396039 containerd[1605]: time="2024-10-09T00:44:07.395973324Z" level=error msg="ContainerStatus for \"d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b\": not found" Oct 9 00:44:07.396190 kubelet[2832]: E1009 00:44:07.396163 2832 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b\": not found" containerID="d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b" Oct 9 00:44:07.396298 kubelet[2832]: I1009 00:44:07.396284 2832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b"} err="failed to get container status \"d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"d856a43d7dfe3b1660d5507664d51a9c4b76f5219aa51c43141f457622e1b33b\": not found" Oct 9 00:44:08.243937 sshd[4469]: pam_unix(sshd:session): session closed for user core Oct 9 00:44:08.255750 systemd[1]: Started sshd@23-10.0.0.37:22-10.0.0.1:38824.service - OpenSSH per-connection server daemon (10.0.0.1:38824). Oct 9 00:44:08.256130 systemd[1]: sshd@22-10.0.0.37:22-10.0.0.1:38816.service: Deactivated successfully. Oct 9 00:44:08.258894 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 00:44:08.259712 systemd-logind[1577]: Session 23 logged out. Waiting for processes to exit. Oct 9 00:44:08.260809 systemd-logind[1577]: Removed session 23. Oct 9 00:44:08.284917 sshd[4637]: Accepted publickey for core from 10.0.0.1 port 38824 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:44:08.286073 sshd[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:44:08.289698 systemd-logind[1577]: New session 24 of user core. Oct 9 00:44:08.299769 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 9 00:44:09.117651 kubelet[2832]: I1009 00:44:09.117455 2832 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3888e6d3-cef2-410c-8a64-c0c3ce1214dd" path="/var/lib/kubelet/pods/3888e6d3-cef2-410c-8a64-c0c3ce1214dd/volumes" Oct 9 00:44:09.118889 kubelet[2832]: I1009 00:44:09.118870 2832 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d8ab3d93-f975-422f-9108-8a84e64a0447" path="/var/lib/kubelet/pods/d8ab3d93-f975-422f-9108-8a84e64a0447/volumes" Oct 9 00:44:09.185596 sshd[4637]: pam_unix(sshd:session): session closed for user core Oct 9 00:44:09.199808 systemd[1]: Started sshd@24-10.0.0.37:22-10.0.0.1:38832.service - OpenSSH per-connection server daemon (10.0.0.1:38832). Oct 9 00:44:09.200311 systemd[1]: sshd@23-10.0.0.37:22-10.0.0.1:38824.service: Deactivated successfully. 
Oct 9 00:44:09.202880 kubelet[2832]: I1009 00:44:09.202626 2832 topology_manager.go:215] "Topology Admit Handler" podUID="9584d9b1-b1d2-47ee-817a-9086bcd6e6ba" podNamespace="kube-system" podName="cilium-js6tz" Oct 9 00:44:09.202880 kubelet[2832]: E1009 00:44:09.202687 2832 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3888e6d3-cef2-410c-8a64-c0c3ce1214dd" containerName="cilium-operator" Oct 9 00:44:09.202880 kubelet[2832]: E1009 00:44:09.202699 2832 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8ab3d93-f975-422f-9108-8a84e64a0447" containerName="mount-bpf-fs" Oct 9 00:44:09.202880 kubelet[2832]: E1009 00:44:09.202706 2832 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8ab3d93-f975-422f-9108-8a84e64a0447" containerName="cilium-agent" Oct 9 00:44:09.202880 kubelet[2832]: E1009 00:44:09.202714 2832 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8ab3d93-f975-422f-9108-8a84e64a0447" containerName="mount-cgroup" Oct 9 00:44:09.202880 kubelet[2832]: E1009 00:44:09.202722 2832 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8ab3d93-f975-422f-9108-8a84e64a0447" containerName="apply-sysctl-overwrites" Oct 9 00:44:09.202880 kubelet[2832]: E1009 00:44:09.202729 2832 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8ab3d93-f975-422f-9108-8a84e64a0447" containerName="clean-cilium-state" Oct 9 00:44:09.207030 kubelet[2832]: I1009 00:44:09.206975 2832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3888e6d3-cef2-410c-8a64-c0c3ce1214dd" containerName="cilium-operator" Oct 9 00:44:09.207030 kubelet[2832]: I1009 00:44:09.207024 2832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ab3d93-f975-422f-9108-8a84e64a0447" containerName="cilium-agent" Oct 9 00:44:09.211496 systemd[1]: session-24.scope: Deactivated successfully. Oct 9 00:44:09.218475 systemd-logind[1577]: Session 24 logged out. Waiting for processes to exit. 
Oct 9 00:44:09.225905 systemd-logind[1577]: Removed session 24. Oct 9 00:44:09.259369 sshd[4651]: Accepted publickey for core from 10.0.0.1 port 38832 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:44:09.260805 sshd[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:44:09.265394 systemd-logind[1577]: New session 25 of user core. Oct 9 00:44:09.276794 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 9 00:44:09.326594 sshd[4651]: pam_unix(sshd:session): session closed for user core Oct 9 00:44:09.338794 systemd[1]: Started sshd@25-10.0.0.37:22-10.0.0.1:38838.service - OpenSSH per-connection server daemon (10.0.0.1:38838). Oct 9 00:44:09.339174 systemd[1]: sshd@24-10.0.0.37:22-10.0.0.1:38832.service: Deactivated successfully. Oct 9 00:44:09.341787 systemd-logind[1577]: Session 25 logged out. Waiting for processes to exit. Oct 9 00:44:09.341998 systemd[1]: session-25.scope: Deactivated successfully. Oct 9 00:44:09.344188 systemd-logind[1577]: Removed session 25. 
Oct 9 00:44:09.364243 kubelet[2832]: I1009 00:44:09.364114 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9584d9b1-b1d2-47ee-817a-9086bcd6e6ba-cilium-config-path\") pod \"cilium-js6tz\" (UID: \"9584d9b1-b1d2-47ee-817a-9086bcd6e6ba\") " pod="kube-system/cilium-js6tz" Oct 9 00:44:09.364243 kubelet[2832]: I1009 00:44:09.364164 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9584d9b1-b1d2-47ee-817a-9086bcd6e6ba-bpf-maps\") pod \"cilium-js6tz\" (UID: \"9584d9b1-b1d2-47ee-817a-9086bcd6e6ba\") " pod="kube-system/cilium-js6tz" Oct 9 00:44:09.364243 kubelet[2832]: I1009 00:44:09.364185 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9584d9b1-b1d2-47ee-817a-9086bcd6e6ba-cni-path\") pod \"cilium-js6tz\" (UID: \"9584d9b1-b1d2-47ee-817a-9086bcd6e6ba\") " pod="kube-system/cilium-js6tz" Oct 9 00:44:09.364243 kubelet[2832]: I1009 00:44:09.364204 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9584d9b1-b1d2-47ee-817a-9086bcd6e6ba-host-proc-sys-kernel\") pod \"cilium-js6tz\" (UID: \"9584d9b1-b1d2-47ee-817a-9086bcd6e6ba\") " pod="kube-system/cilium-js6tz" Oct 9 00:44:09.364243 kubelet[2832]: I1009 00:44:09.364223 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9584d9b1-b1d2-47ee-817a-9086bcd6e6ba-hostproc\") pod \"cilium-js6tz\" (UID: \"9584d9b1-b1d2-47ee-817a-9086bcd6e6ba\") " pod="kube-system/cilium-js6tz" Oct 9 00:44:09.364420 kubelet[2832]: I1009 00:44:09.364285 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9584d9b1-b1d2-47ee-817a-9086bcd6e6ba-xtables-lock\") pod \"cilium-js6tz\" (UID: \"9584d9b1-b1d2-47ee-817a-9086bcd6e6ba\") " pod="kube-system/cilium-js6tz" Oct 9 00:44:09.364420 kubelet[2832]: I1009 00:44:09.364321 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9584d9b1-b1d2-47ee-817a-9086bcd6e6ba-clustermesh-secrets\") pod \"cilium-js6tz\" (UID: \"9584d9b1-b1d2-47ee-817a-9086bcd6e6ba\") " pod="kube-system/cilium-js6tz" Oct 9 00:44:09.364420 kubelet[2832]: I1009 00:44:09.364387 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9584d9b1-b1d2-47ee-817a-9086bcd6e6ba-lib-modules\") pod \"cilium-js6tz\" (UID: \"9584d9b1-b1d2-47ee-817a-9086bcd6e6ba\") " pod="kube-system/cilium-js6tz" Oct 9 00:44:09.364420 kubelet[2832]: I1009 00:44:09.364419 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9584d9b1-b1d2-47ee-817a-9086bcd6e6ba-host-proc-sys-net\") pod \"cilium-js6tz\" (UID: \"9584d9b1-b1d2-47ee-817a-9086bcd6e6ba\") " pod="kube-system/cilium-js6tz" Oct 9 00:44:09.364525 kubelet[2832]: I1009 00:44:09.364482 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9584d9b1-b1d2-47ee-817a-9086bcd6e6ba-cilium-run\") pod \"cilium-js6tz\" (UID: \"9584d9b1-b1d2-47ee-817a-9086bcd6e6ba\") " pod="kube-system/cilium-js6tz" Oct 9 00:44:09.364525 kubelet[2832]: I1009 00:44:09.364508 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9584d9b1-b1d2-47ee-817a-9086bcd6e6ba-cilium-cgroup\") pod 
\"cilium-js6tz\" (UID: \"9584d9b1-b1d2-47ee-817a-9086bcd6e6ba\") " pod="kube-system/cilium-js6tz" Oct 9 00:44:09.364525 kubelet[2832]: I1009 00:44:09.364526 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9584d9b1-b1d2-47ee-817a-9086bcd6e6ba-etc-cni-netd\") pod \"cilium-js6tz\" (UID: \"9584d9b1-b1d2-47ee-817a-9086bcd6e6ba\") " pod="kube-system/cilium-js6tz" Oct 9 00:44:09.364601 kubelet[2832]: I1009 00:44:09.364550 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9584d9b1-b1d2-47ee-817a-9086bcd6e6ba-hubble-tls\") pod \"cilium-js6tz\" (UID: \"9584d9b1-b1d2-47ee-817a-9086bcd6e6ba\") " pod="kube-system/cilium-js6tz" Oct 9 00:44:09.364601 kubelet[2832]: I1009 00:44:09.364570 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9584d9b1-b1d2-47ee-817a-9086bcd6e6ba-cilium-ipsec-secrets\") pod \"cilium-js6tz\" (UID: \"9584d9b1-b1d2-47ee-817a-9086bcd6e6ba\") " pod="kube-system/cilium-js6tz" Oct 9 00:44:09.364643 kubelet[2832]: I1009 00:44:09.364607 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq7ff\" (UniqueName: \"kubernetes.io/projected/9584d9b1-b1d2-47ee-817a-9086bcd6e6ba-kube-api-access-mq7ff\") pod \"cilium-js6tz\" (UID: \"9584d9b1-b1d2-47ee-817a-9086bcd6e6ba\") " pod="kube-system/cilium-js6tz" Oct 9 00:44:09.367434 sshd[4660]: Accepted publickey for core from 10.0.0.1 port 38838 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 00:44:09.368616 sshd[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 00:44:09.372712 systemd-logind[1577]: New session 26 of user core. 
Oct 9 00:44:09.382689 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 9 00:44:09.512441 kubelet[2832]: E1009 00:44:09.512389 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:44:09.513065 containerd[1605]: time="2024-10-09T00:44:09.513030511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-js6tz,Uid:9584d9b1-b1d2-47ee-817a-9086bcd6e6ba,Namespace:kube-system,Attempt:0,}"
Oct 9 00:44:09.535488 containerd[1605]: time="2024-10-09T00:44:09.535331434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 00:44:09.535651 containerd[1605]: time="2024-10-09T00:44:09.535455277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 00:44:09.535651 containerd[1605]: time="2024-10-09T00:44:09.535482198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:44:09.535721 containerd[1605]: time="2024-10-09T00:44:09.535575440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 00:44:09.568378 containerd[1605]: time="2024-10-09T00:44:09.568330645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-js6tz,Uid:9584d9b1-b1d2-47ee-817a-9086bcd6e6ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"9683fb0d1cf814a8f7641b9c649a902c56f66dfd63248f68d8c2bf1cf9974111\""
Oct 9 00:44:09.569409 kubelet[2832]: E1009 00:44:09.569181 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:44:09.572291 containerd[1605]: time="2024-10-09T00:44:09.572258231Z" level=info msg="CreateContainer within sandbox \"9683fb0d1cf814a8f7641b9c649a902c56f66dfd63248f68d8c2bf1cf9974111\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Oct 9 00:44:09.581966 containerd[1605]: time="2024-10-09T00:44:09.581919972Z" level=info msg="CreateContainer within sandbox \"9683fb0d1cf814a8f7641b9c649a902c56f66dfd63248f68d8c2bf1cf9974111\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bda16c13e33838e7a0236dfc210e5309f613aa088bb1e4d0238e9c9fc0e78e31\""
Oct 9 00:44:09.583082 containerd[1605]: time="2024-10-09T00:44:09.582602350Z" level=info msg="StartContainer for \"bda16c13e33838e7a0236dfc210e5309f613aa088bb1e4d0238e9c9fc0e78e31\""
Oct 9 00:44:09.634000 containerd[1605]: time="2024-10-09T00:44:09.633883135Z" level=info msg="StartContainer for \"bda16c13e33838e7a0236dfc210e5309f613aa088bb1e4d0238e9c9fc0e78e31\" returns successfully"
Oct 9 00:44:09.698858 containerd[1605]: time="2024-10-09T00:44:09.698801328Z" level=info msg="shim disconnected" id=bda16c13e33838e7a0236dfc210e5309f613aa088bb1e4d0238e9c9fc0e78e31 namespace=k8s.io
Oct 9 00:44:09.698858 containerd[1605]: time="2024-10-09T00:44:09.698853250Z" level=warning msg="cleaning up after shim disconnected" id=bda16c13e33838e7a0236dfc210e5309f613aa088bb1e4d0238e9c9fc0e78e31 namespace=k8s.io
Oct 9 00:44:09.698858 containerd[1605]: time="2024-10-09T00:44:09.698862050Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 00:44:10.303285 kubelet[2832]: E1009 00:44:10.303233 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:44:10.306326 containerd[1605]: time="2024-10-09T00:44:10.306289919Z" level=info msg="CreateContainer within sandbox \"9683fb0d1cf814a8f7641b9c649a902c56f66dfd63248f68d8c2bf1cf9974111\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Oct 9 00:44:10.333696 containerd[1605]: time="2024-10-09T00:44:10.333645599Z" level=info msg="CreateContainer within sandbox \"9683fb0d1cf814a8f7641b9c649a902c56f66dfd63248f68d8c2bf1cf9974111\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6df5081ab0faba2a450ec04d73812e9f2847b0e421d36c94b0de87250caf0b82\""
Oct 9 00:44:10.336214 containerd[1605]: time="2024-10-09T00:44:10.335302642Z" level=info msg="StartContainer for \"6df5081ab0faba2a450ec04d73812e9f2847b0e421d36c94b0de87250caf0b82\""
Oct 9 00:44:10.375737 containerd[1605]: time="2024-10-09T00:44:10.375691585Z" level=info msg="StartContainer for \"6df5081ab0faba2a450ec04d73812e9f2847b0e421d36c94b0de87250caf0b82\" returns successfully"
Oct 9 00:44:10.402241 containerd[1605]: time="2024-10-09T00:44:10.402186522Z" level=info msg="shim disconnected" id=6df5081ab0faba2a450ec04d73812e9f2847b0e421d36c94b0de87250caf0b82 namespace=k8s.io
Oct 9 00:44:10.402241 containerd[1605]: time="2024-10-09T00:44:10.402236683Z" level=warning msg="cleaning up after shim disconnected" id=6df5081ab0faba2a450ec04d73812e9f2847b0e421d36c94b0de87250caf0b82 namespace=k8s.io
Oct 9 00:44:10.402241 containerd[1605]: time="2024-10-09T00:44:10.402245643Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 00:44:11.184758 kubelet[2832]: E1009 00:44:11.184725 2832 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 9 00:44:11.306707 kubelet[2832]: E1009 00:44:11.306672 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:44:11.308720 containerd[1605]: time="2024-10-09T00:44:11.308675196Z" level=info msg="CreateContainer within sandbox \"9683fb0d1cf814a8f7641b9c649a902c56f66dfd63248f68d8c2bf1cf9974111\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Oct 9 00:44:11.321668 containerd[1605]: time="2024-10-09T00:44:11.321623767Z" level=info msg="CreateContainer within sandbox \"9683fb0d1cf814a8f7641b9c649a902c56f66dfd63248f68d8c2bf1cf9974111\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"018159fccc4089991e148b82d9d35212400dd525bc367a2b062a415a10810004\""
Oct 9 00:44:11.322785 containerd[1605]: time="2024-10-09T00:44:11.322089739Z" level=info msg="StartContainer for \"018159fccc4089991e148b82d9d35212400dd525bc367a2b062a415a10810004\""
Oct 9 00:44:11.375285 containerd[1605]: time="2024-10-09T00:44:11.375249061Z" level=info msg="StartContainer for \"018159fccc4089991e148b82d9d35212400dd525bc367a2b062a415a10810004\" returns successfully"
Oct 9 00:44:11.424133 containerd[1605]: time="2024-10-09T00:44:11.424067952Z" level=info msg="shim disconnected" id=018159fccc4089991e148b82d9d35212400dd525bc367a2b062a415a10810004 namespace=k8s.io
Oct 9 00:44:11.424133 containerd[1605]: time="2024-10-09T00:44:11.424124114Z" level=warning msg="cleaning up after shim disconnected" id=018159fccc4089991e148b82d9d35212400dd525bc367a2b062a415a10810004 namespace=k8s.io
Oct 9 00:44:11.424133 containerd[1605]: time="2024-10-09T00:44:11.424132994Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 00:44:11.470111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-018159fccc4089991e148b82d9d35212400dd525bc367a2b062a415a10810004-rootfs.mount: Deactivated successfully.
Oct 9 00:44:12.311901 kubelet[2832]: E1009 00:44:12.311851 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:44:12.314332 containerd[1605]: time="2024-10-09T00:44:12.313854744Z" level=info msg="CreateContainer within sandbox \"9683fb0d1cf814a8f7641b9c649a902c56f66dfd63248f68d8c2bf1cf9974111\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 9 00:44:12.322917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2077822118.mount: Deactivated successfully.
Oct 9 00:44:12.326603 containerd[1605]: time="2024-10-09T00:44:12.326555062Z" level=info msg="CreateContainer within sandbox \"9683fb0d1cf814a8f7641b9c649a902c56f66dfd63248f68d8c2bf1cf9974111\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7b6e36014d9a46c8190a071ce6cb671b59dbdcb6f7c3bbe1e4dae51ea3f4f730\""
Oct 9 00:44:12.327090 containerd[1605]: time="2024-10-09T00:44:12.327040354Z" level=info msg="StartContainer for \"7b6e36014d9a46c8190a071ce6cb671b59dbdcb6f7c3bbe1e4dae51ea3f4f730\""
Oct 9 00:44:12.371277 containerd[1605]: time="2024-10-09T00:44:12.371198096Z" level=info msg="StartContainer for \"7b6e36014d9a46c8190a071ce6cb671b59dbdcb6f7c3bbe1e4dae51ea3f4f730\" returns successfully"
Oct 9 00:44:12.388210 containerd[1605]: time="2024-10-09T00:44:12.388154039Z" level=info msg="shim disconnected" id=7b6e36014d9a46c8190a071ce6cb671b59dbdcb6f7c3bbe1e4dae51ea3f4f730 namespace=k8s.io
Oct 9 00:44:12.388210 containerd[1605]: time="2024-10-09T00:44:12.388204640Z" level=warning msg="cleaning up after shim disconnected" id=7b6e36014d9a46c8190a071ce6cb671b59dbdcb6f7c3bbe1e4dae51ea3f4f730 namespace=k8s.io
Oct 9 00:44:12.388210 containerd[1605]: time="2024-10-09T00:44:12.388212961Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 00:44:12.470226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b6e36014d9a46c8190a071ce6cb671b59dbdcb6f7c3bbe1e4dae51ea3f4f730-rootfs.mount: Deactivated successfully.
Oct 9 00:44:12.782951 kubelet[2832]: I1009 00:44:12.782910 2832 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-09T00:44:12Z","lastTransitionTime":"2024-10-09T00:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Oct 9 00:44:13.331936 kubelet[2832]: E1009 00:44:13.331899 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:44:13.335860 containerd[1605]: time="2024-10-09T00:44:13.335812923Z" level=info msg="CreateContainer within sandbox \"9683fb0d1cf814a8f7641b9c649a902c56f66dfd63248f68d8c2bf1cf9974111\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 9 00:44:13.346000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3361423996.mount: Deactivated successfully.
Oct 9 00:44:13.346685 containerd[1605]: time="2024-10-09T00:44:13.346561704Z" level=info msg="CreateContainer within sandbox \"9683fb0d1cf814a8f7641b9c649a902c56f66dfd63248f68d8c2bf1cf9974111\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"53a6efe824c289fb72694d79f2f543c5846cb6ecbc9ed4c7c5753754021ca175\""
Oct 9 00:44:13.347529 containerd[1605]: time="2024-10-09T00:44:13.347496807Z" level=info msg="StartContainer for \"53a6efe824c289fb72694d79f2f543c5846cb6ecbc9ed4c7c5753754021ca175\""
Oct 9 00:44:13.393168 containerd[1605]: time="2024-10-09T00:44:13.393114997Z" level=info msg="StartContainer for \"53a6efe824c289fb72694d79f2f543c5846cb6ecbc9ed4c7c5753754021ca175\" returns successfully"
Oct 9 00:44:13.649765 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Oct 9 00:44:14.115618 kubelet[2832]: E1009 00:44:14.115584 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:44:14.336675 kubelet[2832]: E1009 00:44:14.336597 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:44:15.513952 kubelet[2832]: E1009 00:44:15.513882 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:44:16.427341 systemd-networkd[1246]: lxc_health: Link UP
Oct 9 00:44:16.434737 systemd-networkd[1246]: lxc_health: Gained carrier
Oct 9 00:44:17.515804 kubelet[2832]: E1009 00:44:17.515768 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:44:17.541874 kubelet[2832]: I1009 00:44:17.541819 2832 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-js6tz" podStartSLOduration=8.541775557 podStartE2EDuration="8.541775557s" podCreationTimestamp="2024-10-09 00:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 00:44:14.35309077 +0000 UTC m=+83.326027243" watchObservedRunningTime="2024-10-09 00:44:17.541775557 +0000 UTC m=+86.514711990"
Oct 9 00:44:18.179565 systemd-networkd[1246]: lxc_health: Gained IPv6LL
Oct 9 00:44:18.347171 kubelet[2832]: E1009 00:44:18.347068 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:44:19.116438 kubelet[2832]: E1009 00:44:19.116386 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:44:22.114278 sshd[4660]: pam_unix(sshd:session): session closed for user core
Oct 9 00:44:22.115985 kubelet[2832]: E1009 00:44:22.115545 2832 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 9 00:44:22.119305 systemd[1]: sshd@25-10.0.0.37:22-10.0.0.1:38838.service: Deactivated successfully.
Oct 9 00:44:22.123085 systemd[1]: session-26.scope: Deactivated successfully.
Oct 9 00:44:22.124516 systemd-logind[1577]: Session 26 logged out. Waiting for processes to exit.
Oct 9 00:44:22.125856 systemd-logind[1577]: Removed session 26.