Mar 17 17:31:23.894198 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 17:31:23.894220 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Mar 17 16:11:40 -00 2025
Mar 17 17:31:23.894230 kernel: KASLR enabled
Mar 17 17:31:23.894236 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:31:23.894242 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Mar 17 17:31:23.894247 kernel: random: crng init done
Mar 17 17:31:23.894254 kernel: secureboot: Secure boot disabled
Mar 17 17:31:23.894260 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:31:23.894266 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Mar 17 17:31:23.894274 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 17 17:31:23.894280 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:31:23.894286 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:31:23.894292 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:31:23.894298 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:31:23.894305 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:31:23.894313 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:31:23.894319 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:31:23.894325 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:31:23.894332 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:31:23.894338 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 17 17:31:23.894344 kernel: NUMA: Failed to initialise from firmware
Mar 17 17:31:23.894350 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 17:31:23.894356 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff]
Mar 17 17:31:23.894362 kernel: Zone ranges:
Mar 17 17:31:23.894368 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 17:31:23.894376 kernel: DMA32 empty
Mar 17 17:31:23.894383 kernel: Normal empty
Mar 17 17:31:23.894389 kernel: Movable zone start for each node
Mar 17 17:31:23.894395 kernel: Early memory node ranges
Mar 17 17:31:23.894401 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Mar 17 17:31:23.894408 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Mar 17 17:31:23.894414 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Mar 17 17:31:23.894420 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Mar 17 17:31:23.894441 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Mar 17 17:31:23.894447 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Mar 17 17:31:23.894454 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Mar 17 17:31:23.894460 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Mar 17 17:31:23.894467 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Mar 17 17:31:23.894473 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 17:31:23.894480 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 17 17:31:23.894489 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:31:23.894495 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 17:31:23.894502 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:31:23.894510 kernel: psci: Trusted OS migration not required
Mar 17 17:31:23.894517 kernel: psci: SMC Calling Convention v1.1
Mar 17 17:31:23.894523 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 17 17:31:23.894529 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:31:23.894536 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:31:23.894543 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 17 17:31:23.894549 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:31:23.894556 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:31:23.894562 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 17:31:23.894569 kernel: CPU features: detected: Spectre-v4
Mar 17 17:31:23.894577 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:31:23.894583 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 17:31:23.894590 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 17:31:23.894596 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 17:31:23.894602 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 17:31:23.894609 kernel: alternatives: applying boot alternatives
Mar 17 17:31:23.894616 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a
Mar 17 17:31:23.894623 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:31:23.894630 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:31:23.894637 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:31:23.894643 kernel: Fallback order for Node 0: 0
Mar 17 17:31:23.894652 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 17 17:31:23.894658 kernel: Policy zone: DMA
Mar 17 17:31:23.894665 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:31:23.894671 kernel: software IO TLB: area num 4.
Mar 17 17:31:23.894678 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Mar 17 17:31:23.894685 kernel: Memory: 2387544K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 184744K reserved, 0K cma-reserved)
Mar 17 17:31:23.894691 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 17:31:23.894698 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:31:23.894705 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:31:23.894711 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 17:31:23.894718 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:31:23.894724 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:31:23.894750 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
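The kernel command line above encodes Flatcar's A/B /usr scheme: mount.usr points at a dm-verity device (/dev/mapper/usr) whose root hash is pinned at build time by verity.usrhash, while root=LABEL=ROOT selects the writable root filesystem. To illustrate the flag format, here is a minimal sketch of splitting such a line into key/value pairs, assuming only that it is read from /proc/cmdline (the parse_cmdline helper name is mine):

    from pathlib import Path

    def parse_cmdline(text: str) -> dict[str, str | None]:
        """Split a kernel command line into {flag: value} pairs.

        Bare flags map to None. Quoting is not handled; none of the
        flags on this boot's command line are quoted.
        """
        args: dict[str, str | None] = {}
        for token in text.split():
            key, sep, value = token.partition("=")
            args[key] = value if sep else None
        return args

    args = parse_cmdline(Path("/proc/cmdline").read_text())
    print(args.get("root"), args.get("verity.usrhash"))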
Mar 17 17:31:23.894757 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 17:31:23.894763 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:31:23.894770 kernel: GICv3: 256 SPIs implemented
Mar 17 17:31:23.894776 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:31:23.894783 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:31:23.894789 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 17 17:31:23.894796 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 17 17:31:23.894802 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 17 17:31:23.894809 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 17:31:23.894816 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 17:31:23.894825 kernel: GICv3: using LPI property table @0x00000000400f0000
Mar 17 17:31:23.894831 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Mar 17 17:31:23.894838 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:31:23.894845 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:31:23.894851 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 17:31:23.894858 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 17:31:23.894865 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 17:31:23.894871 kernel: arm-pv: using stolen time PV
Mar 17 17:31:23.894878 kernel: Console: colour dummy device 80x25
Mar 17 17:31:23.894885 kernel: ACPI: Core revision 20230628
Mar 17 17:31:23.894892 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 17:31:23.894900 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:31:23.894907 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:31:23.894913 kernel: landlock: Up and running.
Mar 17 17:31:23.894920 kernel: SELinux: Initializing.
Mar 17 17:31:23.894926 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:31:23.894933 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:31:23.894940 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:31:23.894947 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:31:23.894954 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:31:23.894962 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:31:23.894968 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 17 17:31:23.894975 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 17 17:31:23.894981 kernel: Remapping and enabling EFI services.
Mar 17 17:31:23.894988 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:31:23.894995 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:31:23.895002 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 17 17:31:23.895008 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Mar 17 17:31:23.895015 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:31:23.895023 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 17:31:23.895030 kernel: Detected PIPT I-cache on CPU2
Mar 17 17:31:23.895042 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 17 17:31:23.895050 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Mar 17 17:31:23.895057 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:31:23.895064 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 17 17:31:23.895071 kernel: Detected PIPT I-cache on CPU3
Mar 17 17:31:23.895078 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 17 17:31:23.895085 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Mar 17 17:31:23.895094 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:31:23.895101 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 17 17:31:23.895108 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 17:31:23.895114 kernel: SMP: Total of 4 processors activated.
Mar 17 17:31:23.895121 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:31:23.895129 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 17:31:23.895136 kernel: CPU features: detected: Common not Private translations
Mar 17 17:31:23.895143 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:31:23.895151 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 17 17:31:23.895159 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 17:31:23.895166 kernel: CPU features: detected: LSE atomic instructions
Mar 17 17:31:23.895173 kernel: CPU features: detected: Privileged Access Never
Mar 17 17:31:23.895180 kernel: CPU features: detected: RAS Extension Support
Mar 17 17:31:23.895187 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 17 17:31:23.895194 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:31:23.895201 kernel: alternatives: applying system-wide alternatives
Mar 17 17:31:23.895208 kernel: devtmpfs: initialized
Mar 17 17:31:23.895215 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:31:23.895224 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 17:31:23.895236 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:31:23.895243 kernel: SMBIOS 3.0.0 present.
Mar 17 17:31:23.895250 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Mar 17 17:31:23.895257 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:31:23.895264 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:31:23.895271 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:31:23.895279 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:31:23.895287 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:31:23.895295 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Mar 17 17:31:23.895302 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:31:23.895309 kernel: cpuidle: using governor menu
Mar 17 17:31:23.895316 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:31:23.895323 kernel: ASID allocator initialised with 32768 entries
Mar 17 17:31:23.895331 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:31:23.895338 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:31:23.895344 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 17 17:31:23.895353 kernel: Modules: 0 pages in range for non-PLT usage
Mar 17 17:31:23.895360 kernel: Modules: 509280 pages in range for PLT usage
Mar 17 17:31:23.895367 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:31:23.895374 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:31:23.895382 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:31:23.895389 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:31:23.895396 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:31:23.895403 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:31:23.895411 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:31:23.895419 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:31:23.895426 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:31:23.895433 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:31:23.895440 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:31:23.895447 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:31:23.895454 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:31:23.895461 kernel: ACPI: Interpreter enabled
Mar 17 17:31:23.895468 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:31:23.895475 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 17:31:23.895482 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 17:31:23.895490 kernel: printk: console [ttyAMA0] enabled
Mar 17 17:31:23.895498 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:31:23.895642 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:31:23.895718 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 17:31:23.895823 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 17:31:23.895896 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 17 17:31:23.895965 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 17 17:31:23.895979 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 17 17:31:23.895986 kernel: PCI host bridge to bus 0000:00
Mar 17 17:31:23.896063 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 17 17:31:23.896130 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 17:31:23.896201 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 17 17:31:23.896266 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:31:23.896354 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 17 17:31:23.896449 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 17:31:23.896525 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 17 17:31:23.896606 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 17 17:31:23.896683 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:31:23.896885 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 17:31:23.896963 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 17 17:31:23.897038 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 17 17:31:23.897104 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 17 17:31:23.897166 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 17:31:23.897227 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 17 17:31:23.897237 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 17:31:23.897244 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 17:31:23.897251 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 17:31:23.897258 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 17:31:23.897268 kernel: iommu: Default domain type: Translated
Mar 17 17:31:23.897276 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:31:23.897283 kernel: efivars: Registered efivars operations
Mar 17 17:31:23.897291 kernel: vgaarb: loaded
Mar 17 17:31:23.897298 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:31:23.897305 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:31:23.897313 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:31:23.897320 kernel: pnp: PnP ACPI init
Mar 17 17:31:23.897396 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 17 17:31:23.897410 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 17:31:23.897417 kernel: NET: Registered PF_INET protocol family
Mar 17 17:31:23.897424 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:31:23.897432 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:31:23.897439 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:31:23.897446 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:31:23.897454 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:31:23.897461 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:31:23.897470 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:31:23.897478 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:31:23.897485 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:31:23.897492 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:31:23.897500 kernel: kvm [1]: HYP mode not available
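The enumeration above shows QEMU's PCIe root (device 1b36:0008) and a single virtio function at 0000:00:01.0 with its three BARs assigned. Once the system is up, the same vendor:device IDs can be read back from sysfs; a small sketch, assuming the standard Linux /sys layout:

    from pathlib import Path

    # Each PCI function is a directory /sys/bus/pci/devices/<dddd:bb:dd.f>
    # whose 'vendor', 'device' and 'class' attributes hold hex strings.
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        vendor = (dev / "vendor").read_text().strip()    # e.g. 0x1b36
        device = (dev / "device").read_text().strip()    # e.g. 0x0008
        pci_cls = (dev / "class").read_text().strip()    # e.g. 0x060000
        print(f"{dev.name} [{vendor[2:]}:{device[2:]}] class {pci_cls}")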
Mar 17 17:31:23.897507 kernel: Initialise system trusted keyrings
Mar 17 17:31:23.897514 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:31:23.897521 kernel: Key type asymmetric registered
Mar 17 17:31:23.897528 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:31:23.897535 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:31:23.897544 kernel: io scheduler mq-deadline registered
Mar 17 17:31:23.897551 kernel: io scheduler kyber registered
Mar 17 17:31:23.897558 kernel: io scheduler bfq registered
Mar 17 17:31:23.897565 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 17:31:23.897573 kernel: ACPI: button: Power Button [PWRB]
Mar 17 17:31:23.897580 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 17:31:23.897650 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 17 17:31:23.897661 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:31:23.897668 kernel: thunder_xcv, ver 1.0
Mar 17 17:31:23.897678 kernel: thunder_bgx, ver 1.0
Mar 17 17:31:23.897685 kernel: nicpf, ver 1.0
Mar 17 17:31:23.897692 kernel: nicvf, ver 1.0
Mar 17 17:31:23.897795 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 17:31:23.897868 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:31:23 UTC (1742232683)
Mar 17 17:31:23.897878 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 17:31:23.897886 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 17 17:31:23.897893 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 17 17:31:23.897904 kernel: watchdog: Hard watchdog permanently disabled
Mar 17 17:31:23.897911 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:31:23.897918 kernel: Segment Routing with IPv6
Mar 17 17:31:23.897926 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:31:23.897933 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:31:23.897940 kernel: Key type dns_resolver registered
Mar 17 17:31:23.897947 kernel: registered taskstats version 1
Mar 17 17:31:23.897955 kernel: Loading compiled-in X.509 certificates
Mar 17 17:31:23.897962 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: f4ff2820cf7379ce82b759137d15b536f0a99b51'
Mar 17 17:31:23.897971 kernel: Key type .fscrypt registered
Mar 17 17:31:23.897978 kernel: Key type fscrypt-provisioning registered
Mar 17 17:31:23.897986 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:31:23.897993 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:31:23.898000 kernel: ima: No architecture policies found
Mar 17 17:31:23.898007 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 17:31:23.898015 kernel: clk: Disabling unused clocks
Mar 17 17:31:23.898022 kernel: Freeing unused kernel memory: 38336K
Mar 17 17:31:23.898030 kernel: Run /init as init process
Mar 17 17:31:23.898037 kernel: with arguments:
Mar 17 17:31:23.898045 kernel: /init
Mar 17 17:31:23.898051 kernel: with environment:
Mar 17 17:31:23.898058 kernel: HOME=/
Mar 17 17:31:23.898065 kernel: TERM=linux
Mar 17 17:31:23.898072 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:31:23.898080 systemd[1]: Successfully made /usr/ read-only.
Mar 17 17:31:23.898090 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:31:23.898100 systemd[1]: Detected virtualization kvm.
Mar 17 17:31:23.898108 systemd[1]: Detected architecture arm64.
Mar 17 17:31:23.898115 systemd[1]: Running in initrd.
Mar 17 17:31:23.898123 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:31:23.898131 systemd[1]: Hostname set to .
Mar 17 17:31:23.898139 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:31:23.898146 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:31:23.898155 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:31:23.898163 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:31:23.898172 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:31:23.898180 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:31:23.898188 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:31:23.898197 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:31:23.898206 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:31:23.898215 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:31:23.898223 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:31:23.898231 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:31:23.898239 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:31:23.898247 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:31:23.898255 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:31:23.898263 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:31:23.898270 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:31:23.898278 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:31:23.898287 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:31:23.898295 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 17 17:31:23.898303 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:31:23.898311 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:31:23.898319 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:31:23.898327 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:31:23.898335 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:31:23.898342 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:31:23.898352 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:31:23.898359 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:31:23.898367 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:31:23.898376 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:31:23.898383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:31:23.898391 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:31:23.898399 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:31:23.898409 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:31:23.898435 systemd-journald[237]: Collecting audit messages is disabled.
Mar 17 17:31:23.898456 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:31:23.898465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:31:23.898473 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:31:23.898482 systemd-journald[237]: Journal started
Mar 17 17:31:23.898501 systemd-journald[237]: Runtime Journal (/run/log/journal/3a34a918cb074445a0a79d700152bc58) is 5.9M, max 47.3M, 41.4M free.
Mar 17 17:31:23.883558 systemd-modules-load[239]: Inserted module 'overlay'
Mar 17 17:31:23.901289 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:31:23.901318 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:31:23.902519 systemd-modules-load[239]: Inserted module 'br_netfilter'
Mar 17 17:31:23.903278 kernel: Bridge firewalling registered
Mar 17 17:31:23.904774 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:31:23.907173 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:31:23.908783 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:31:23.912709 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:31:23.915184 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:31:23.917663 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:31:23.923802 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:31:23.926640 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:31:23.937931 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:31:23.938957 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:31:23.941346 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:31:23.956306 dracut-cmdline[283]: dracut-dracut-053
Mar 17 17:31:23.958786 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a
Mar 17 17:31:23.973062 systemd-resolved[279]: Positive Trust Anchors:
Mar 17 17:31:23.973077 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:31:23.973108 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:31:23.978901 systemd-resolved[279]: Defaulting to hostname 'linux'.
Mar 17 17:31:23.979979 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:31:23.980836 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:31:24.033771 kernel: SCSI subsystem initialized
Mar 17 17:31:24.038756 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:31:24.047753 kernel: iscsi: registered transport (tcp)
Mar 17 17:31:24.061769 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:31:24.061793 kernel: QLogic iSCSI HBA Driver
Mar 17 17:31:24.108840 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:31:24.119914 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:31:24.136487 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:31:24.136535 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:31:24.136561 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:31:24.185768 kernel: raid6: neonx8 gen() 15785 MB/s
Mar 17 17:31:24.202743 kernel: raid6: neonx4 gen() 15808 MB/s
Mar 17 17:31:24.219744 kernel: raid6: neonx2 gen() 12704 MB/s
Mar 17 17:31:24.236747 kernel: raid6: neonx1 gen() 10450 MB/s
Mar 17 17:31:24.253743 kernel: raid6: int64x8 gen() 6780 MB/s
Mar 17 17:31:24.270744 kernel: raid6: int64x4 gen() 7350 MB/s
Mar 17 17:31:24.287747 kernel: raid6: int64x2 gen() 6112 MB/s
Mar 17 17:31:24.304753 kernel: raid6: int64x1 gen() 5055 MB/s
Mar 17 17:31:24.304787 kernel: raid6: using algorithm neonx4 gen() 15808 MB/s
Mar 17 17:31:24.321754 kernel: raid6: .... xor() 12500 MB/s, rmw enabled
Mar 17 17:31:24.321767 kernel: raid6: using neon recovery algorithm
Mar 17 17:31:24.326749 kernel: xor: measuring software checksum speed
Mar 17 17:31:24.326769 kernel: 8regs : 21618 MB/sec
Mar 17 17:31:24.326779 kernel: 32regs : 20010 MB/sec
Mar 17 17:31:24.328110 kernel: arm64_neon : 27879 MB/sec
Mar 17 17:31:24.328123 kernel: xor: using function: arm64_neon (27879 MB/sec)
Mar 17 17:31:24.381771 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:31:24.392354 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:31:24.400927 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:31:24.413718 systemd-udevd[465]: Using default interface naming scheme 'v255'.
Mar 17 17:31:24.417487 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:31:24.429933 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:31:24.441179 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Mar 17 17:31:24.468819 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:31:24.482924 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:31:24.523354 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:31:24.528892 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:31:24.541788 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:31:24.543621 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:31:24.545712 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:31:24.546664 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:31:24.552886 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:31:24.564425 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:31:24.575797 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Mar 17 17:31:24.583301 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 17:31:24.583408 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:31:24.583426 kernel: GPT:9289727 != 19775487
Mar 17 17:31:24.583436 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:31:24.583447 kernel: GPT:9289727 != 19775487
Mar 17 17:31:24.583455 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:31:24.583464 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:31:24.591908 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:31:24.592056 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:31:24.595236 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:31:24.596173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:31:24.596378 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:31:24.599688 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:31:24.605757 kernel: BTRFS: device fsid 5ecee764-de70-4de1-8711-3798360e0d13 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (513)
Mar 17 17:31:24.605788 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (525)
Mar 17 17:31:24.612916 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:31:24.623779 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:31:24.631425 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 17:31:24.638841 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 17 17:31:24.649028 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 17:31:24.649927 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 17:31:24.657864 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:31:24.666885 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
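The GPT complaints above (9289727 != 19775487) are the expected first-boot state: the backup GPT header still sits where the end of the original, smaller disk image used to be, while the virtual disk is actually 19775488 sectors, so disk-uuid.service is about to rewrite the headers (see "Primary Header is updated" just below; the service also regenerates the disk GUID, the sgdisk -G operation). Done by hand, the equivalent repair is sgdisk's move-backup-structures operation; a hedged sketch, with the device path taken from this log and to be verified before running anything:

    import subprocess

    # sgdisk -e moves the backup GPT header and partition table to the
    # last sectors of the disk; this is the manual analogue of what
    # disk-uuid.service does on this boot. Destructive if pointed at
    # the wrong device.
    subprocess.run(["sgdisk", "-e", "/dev/vda"], check=True)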
Mar 17 17:31:24.668396 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:31:24.686281 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:31:24.744347 disk-uuid[553]: Primary Header is updated.
Mar 17 17:31:24.744347 disk-uuid[553]: Secondary Entries is updated.
Mar 17 17:31:24.744347 disk-uuid[553]: Secondary Header is updated.
Mar 17 17:31:24.747765 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:31:25.760745 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:31:25.761250 disk-uuid[562]: The operation has completed successfully.
Mar 17 17:31:25.786952 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:31:25.787048 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:31:25.821871 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:31:25.824491 sh[575]: Success
Mar 17 17:31:25.833881 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:31:25.863592 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:31:25.876065 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:31:25.877530 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:31:25.887249 kernel: BTRFS info (device dm-0): first mount of filesystem 5ecee764-de70-4de1-8711-3798360e0d13
Mar 17 17:31:25.887279 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:31:25.888091 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:31:25.888105 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:31:25.889140 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:31:25.892032 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:31:25.893084 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:31:25.901920 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:31:25.903236 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:31:25.912388 kernel: BTRFS info (device vda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:31:25.912427 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:31:25.912438 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:31:25.915758 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:31:25.924363 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:31:25.925756 kernel: BTRFS info (device vda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:31:25.930619 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:31:25.939935 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:31:25.994574 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:31:26.001927 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:31:26.030927 ignition[674]: Ignition 2.20.0
Mar 17 17:31:26.030937 ignition[674]: Stage: fetch-offline
Mar 17 17:31:26.030972 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:31:26.030981 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:31:26.031133 ignition[674]: parsed url from cmdline: ""
Mar 17 17:31:26.031137 ignition[674]: no config URL provided
Mar 17 17:31:26.031141 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:31:26.031148 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:31:26.035159 systemd-networkd[772]: lo: Link UP
Mar 17 17:31:26.031168 ignition[674]: op(1): [started] loading QEMU firmware config module
Mar 17 17:31:26.035163 systemd-networkd[772]: lo: Gained carrier
Mar 17 17:31:26.031173 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 17 17:31:26.035964 systemd-networkd[772]: Enumeration completed
Mar 17 17:31:26.036073 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:31:26.036325 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:31:26.036329 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:31:26.043314 ignition[674]: op(1): [finished] loading QEMU firmware config module
Mar 17 17:31:26.036919 systemd-networkd[772]: eth0: Link UP
Mar 17 17:31:26.036922 systemd-networkd[772]: eth0: Gained carrier
Mar 17 17:31:26.036927 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:31:26.038268 systemd[1]: Reached target network.target - Network.
Mar 17 17:31:26.059774 systemd-networkd[772]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:31:26.084206 ignition[674]: parsing config with SHA512: fba4ac8e55e95dc2bb7eb78fced64def509a493a746f88e7ab946581f4525d7420183880faeaa7ba3a2487738cd5361b78623e36ece3a46cc4e26f573d454e74
Mar 17 17:31:26.090054 unknown[674]: fetched base config from "system"
Mar 17 17:31:26.090070 unknown[674]: fetched user config from "qemu"
Mar 17 17:31:26.091314 ignition[674]: fetch-offline: fetch-offline passed
Mar 17 17:31:26.091403 ignition[674]: Ignition finished successfully
Mar 17 17:31:26.092924 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:31:26.093950 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 17:31:26.105900 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:31:26.117520 ignition[778]: Ignition 2.20.0
Mar 17 17:31:26.117531 ignition[778]: Stage: kargs
Mar 17 17:31:26.117684 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:31:26.117694 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:31:26.118587 ignition[778]: kargs: kargs passed
Mar 17 17:31:26.118630 ignition[778]: Ignition finished successfully
Mar 17 17:31:26.121503 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:31:26.132932 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
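On the QEMU platform, Ignition's fetch-offline stage (op(1) above) loads the qemu_fw_cfg module and reads the user config from the firmware-config blob the host supplies; that is the config whose SHA512 is logged before "fetch-offline passed". With the module loaded, the same blob is visible in sysfs. A sketch, assuming the fw_cfg entry name Ignition uses on this platform, opt/com.coreos/config:

    from pathlib import Path

    # qemu_fw_cfg exposes each firmware-config entry as by_name/<name>/raw.
    entry = Path("/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw")
    if entry.exists():
        print(entry.read_bytes().decode("utf-8", errors="replace"))
    else:
        # Matches the "no config URL provided" path earlier in this stage.
        print("no fw_cfg user config present")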
Mar 17 17:31:26.142027 ignition[786]: Ignition 2.20.0
Mar 17 17:31:26.142036 ignition[786]: Stage: disks
Mar 17 17:31:26.142198 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:31:26.142207 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:31:26.143077 ignition[786]: disks: disks passed
Mar 17 17:31:26.144316 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:31:26.143123 ignition[786]: Ignition finished successfully
Mar 17 17:31:26.146929 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:31:26.147755 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:31:26.149317 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:31:26.150714 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:31:26.152022 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:31:26.163898 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:31:26.174605 systemd-fsck[797]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:31:26.179065 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:31:26.894819 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:31:26.947753 kernel: EXT4-fs (vda9): mounted filesystem 3914ef65-c5cd-468c-8ee7-964383d8e9e2 r/w with ordered data mode. Quota mode: none.
Mar 17 17:31:26.948349 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:31:26.949387 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:31:26.959812 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:31:26.961252 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:31:26.962181 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:31:26.962271 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:31:26.967750 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (806)
Mar 17 17:31:26.962324 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:31:26.967878 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:31:26.971263 kernel: BTRFS info (device vda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:31:26.971289 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:31:26.971306 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:31:26.973264 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:31:26.975744 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:31:26.976690 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:31:27.016140 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:31:27.020050 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:31:27.023662 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:31:27.027661 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:31:27.096536 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:31:27.104842 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:31:27.106223 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:31:27.112742 kernel: BTRFS info (device vda6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:31:27.127252 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:31:27.129588 ignition[919]: INFO : Ignition 2.20.0
Mar 17 17:31:27.129588 ignition[919]: INFO : Stage: mount
Mar 17 17:31:27.130813 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:31:27.130813 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:31:27.130813 ignition[919]: INFO : mount: mount passed
Mar 17 17:31:27.130813 ignition[919]: INFO : Ignition finished successfully
Mar 17 17:31:27.131940 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:31:27.138830 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:31:27.886523 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:31:27.900936 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:31:27.907780 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (932)
Mar 17 17:31:27.907817 kernel: BTRFS info (device vda6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:31:27.907828 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:31:27.908913 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:31:27.910754 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:31:27.911799 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:31:27.927441 ignition[949]: INFO : Ignition 2.20.0
Mar 17 17:31:27.927441 ignition[949]: INFO : Stage: files
Mar 17 17:31:27.928653 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:31:27.928653 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:31:27.928653 ignition[949]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:31:27.931245 ignition[949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:31:27.931245 ignition[949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:31:27.933623 ignition[949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:31:27.934622 ignition[949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:31:27.934622 ignition[949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:31:27.934089 unknown[949]: wrote ssh authorized keys file for user: core
Mar 17 17:31:27.937472 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:31:27.938982 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 17 17:31:27.951880 systemd-networkd[772]: eth0: Gained IPv6LL
Mar 17 17:31:27.985370 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:31:28.073370 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:31:28.074972 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:31:28.074972 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 17:31:28.408124 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:31:28.487173 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:31:28.488619 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:31:28.488619 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:31:28.488619 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:31:28.488619 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:31:28.488619 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:31:28.488619 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:31:28.488619 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:31:28.488619 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:31:28.488619 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:31:28.488619 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:31:28.488619 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:31:28.488619 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:31:28.488619 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:31:28.488619 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Mar 17 17:31:28.712109 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:31:28.983658 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:31:28.983658 ignition[949]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:31:28.986648 ignition[949]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:31:28.986648 ignition[949]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:31:28.986648 ignition[949]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:31:28.986648 ignition[949]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 17 17:31:28.986648 ignition[949]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:31:28.986648 ignition[949]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:31:28.986648 ignition[949]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 17 17:31:28.986648 ignition[949]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:31:29.005606 ignition[949]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:31:29.009417 ignition[949]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:31:29.009417 ignition[949]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:31:29.009417 ignition[949]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:31:29.009417 ignition[949]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:31:29.015127 ignition[949]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:31:29.015127 ignition[949]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:31:29.015127 ignition[949]: INFO : files: files passed
Mar 17 17:31:29.015127 ignition[949]: INFO : Ignition finished successfully
Mar 17 17:31:29.013769 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:31:29.031928 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:31:29.034928 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:31:29.036281 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:31:29.036364 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:31:29.042063 initrd-setup-root-after-ignition[978]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 17 17:31:29.045193 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:31:29.045193 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:31:29.048412 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:31:29.049760 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:31:29.051002 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:31:29.058864 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:31:29.076881 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
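The files stage above is a straight replay of the config's declared operations: each op(N) writes a file, link, or systemd unit into /sysroot, and the preset ops (op(10), op(12)) toggle unit enablement by adding or removing symlinks rather than by talking to a running systemd. A sketch of those symlink mechanics, assuming prepare-helm.service carries WantedBy=multi-user.target in its [Install] section:

    from pathlib import Path

    # "setting preset to enabled" amounts to creating the .wants/ symlink
    # under the target named by the unit's [Install] section; "disabled"
    # (op(10)/op(11) above) removes it. Ignition works under /sysroot.
    unit = "prepare-helm.service"
    wants = Path("/sysroot/etc/systemd/system/multi-user.target.wants")
    wants.mkdir(parents=True, exist_ok=True)
    link = wants / unit
    if not link.is_symlink():
        link.symlink_to(f"/etc/systemd/system/{unit}")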
Mar 17 17:31:29.077898 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:31:29.079260 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:31:29.080805 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:31:29.082496 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:31:29.083260 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:31:29.097424 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:31:29.110913 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:31:29.118879 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:31:29.119810 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:31:29.121619 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:31:29.123306 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:31:29.123424 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:31:29.125591 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:31:29.126478 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:31:29.128072 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:31:29.129575 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:31:29.131087 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:31:29.132673 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:31:29.134334 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:31:29.136069 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:31:29.137565 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:31:29.139286 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:31:29.140627 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:31:29.140774 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:31:29.142885 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:31:29.144504 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:31:29.146136 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:31:29.146805 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:31:29.147832 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:31:29.147940 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:31:29.150345 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:31:29.150456 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:31:29.152552 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:31:29.153919 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:31:29.158770 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:31:29.159721 systemd[1]: Stopped target slices.target - Slice Units. 
Mar 17 17:31:29.161674 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:31:29.163079 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:31:29.163164 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:31:29.164496 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:31:29.164567 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:31:29.165987 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:31:29.166096 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:31:29.167664 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:31:29.167785 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:31:29.183933 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:31:29.184662 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:31:29.184816 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:31:29.187346 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:31:29.188055 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:31:29.188177 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:31:29.190082 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:31:29.190196 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:31:29.195634 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:31:29.195754 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:31:29.198910 ignition[1005]: INFO : Ignition 2.20.0 Mar 17 17:31:29.198910 ignition[1005]: INFO : Stage: umount Mar 17 17:31:29.198910 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:31:29.198910 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:31:29.198910 ignition[1005]: INFO : umount: umount passed Mar 17 17:31:29.198910 ignition[1005]: INFO : Ignition finished successfully Mar 17 17:31:29.198452 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:31:29.198568 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:31:29.200335 systemd[1]: Stopped target network.target - Network. Mar 17 17:31:29.201339 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:31:29.201405 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:31:29.203035 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:31:29.203083 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:31:29.204914 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:31:29.204959 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:31:29.206830 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:31:29.206879 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:31:29.208742 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:31:29.210368 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:31:29.212996 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:31:29.216897 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Mar 17 17:31:29.217026 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:31:29.219913 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 17 17:31:29.220159 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:31:29.220200 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:31:29.223763 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 17 17:31:29.225898 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:31:29.226026 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:31:29.230590 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 17 17:31:29.230821 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:31:29.230851 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:31:29.242837 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:31:29.243710 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:31:29.243797 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:31:29.245833 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:31:29.245882 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:31:29.248883 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:31:29.248937 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:31:29.251017 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:31:29.253791 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 17:31:29.259763 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:31:29.259866 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:31:29.273430 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:31:29.273582 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:31:29.275792 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:31:29.275875 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:31:29.277615 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:31:29.277671 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:31:29.278879 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:31:29.278912 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:31:29.279999 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:31:29.280049 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:31:29.282279 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:31:29.282329 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:31:29.284521 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:31:29.284566 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:31:29.286947 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:31:29.286998 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Mar 17 17:31:29.299899 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:31:29.300955 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:31:29.301019 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:31:29.303586 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:31:29.303630 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:31:29.307781 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:31:29.307876 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:31:29.309850 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:31:29.312931 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:31:29.321321 systemd[1]: Switching root. Mar 17 17:31:29.354758 systemd-journald[237]: Journal stopped Mar 17 17:31:30.102407 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Mar 17 17:31:30.102463 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:31:30.102475 kernel: SELinux: policy capability open_perms=1 Mar 17 17:31:30.102484 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:31:30.102493 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:31:30.102503 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:31:30.102512 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:31:30.102522 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:31:30.102535 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:31:30.102545 kernel: audit: type=1403 audit(1742232689.519:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:31:30.102557 systemd[1]: Successfully loaded SELinux policy in 31.027ms. Mar 17 17:31:30.102580 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.271ms. Mar 17 17:31:30.102591 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:31:30.102602 systemd[1]: Detected virtualization kvm. Mar 17 17:31:30.102612 systemd[1]: Detected architecture arm64. Mar 17 17:31:30.102622 systemd[1]: Detected first boot. Mar 17 17:31:30.102632 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:31:30.102644 zram_generator::config[1051]: No configuration found. Mar 17 17:31:30.102655 kernel: NET: Registered PF_VSOCK protocol family Mar 17 17:31:30.102665 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:31:30.102676 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 17 17:31:30.102696 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 17:31:30.102710 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 17 17:31:30.102721 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 17:31:30.102743 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:31:30.102756 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
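The "systemd 256.8 running in system mode (+PAM +AUDIT ...)" line above encodes systemd's compile-time feature set. A throwaway snippet to split such a string into enabled and disabled features (the string below is abbreviated from the logged one):

    # Split a systemd feature string into enabled (+) and disabled (-) sets.
    feature_line = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GNUTLS +OPENSSL -ACL"
    tokens = feature_line.split()
    enabled = sorted(t[1:] for t in tokens if t.startswith("+"))
    disabled = sorted(t[1:] for t in tokens if t.startswith("-"))
    print("enabled: ", enabled)
    print("disabled:", disabled)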
Mar 17 17:31:30.102767 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:31:30.102778 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:31:30.102788 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:31:30.102798 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:31:30.102809 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:31:30.102820 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:31:30.102830 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:31:30.102841 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:31:30.102853 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:31:30.102863 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:31:30.102873 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:31:30.102883 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:31:30.102893 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 17 17:31:30.102903 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:31:30.102913 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 17 17:31:30.102925 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 17 17:31:30.102935 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 17 17:31:30.102945 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:31:30.102956 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:31:30.102966 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:31:30.102976 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:31:30.102987 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:31:30.102997 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:31:30.103007 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:31:30.103019 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 17 17:31:30.103029 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:31:30.103040 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:31:30.103050 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:31:30.103060 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:31:30.103070 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:31:30.103080 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:31:30.103090 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:31:30.103100 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:31:30.103112 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:31:30.103122 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Mar 17 17:31:30.103133 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:31:30.103143 systemd[1]: Reached target machines.target - Containers. Mar 17 17:31:30.103153 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:31:30.103168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:31:30.103179 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:31:30.103203 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:31:30.103214 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:31:30.103226 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:31:30.103236 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:31:30.103246 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:31:30.103256 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:31:30.103267 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:31:30.103279 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 17:31:30.103289 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 17 17:31:30.103299 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 17:31:30.103312 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 17:31:30.103323 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:31:30.103333 kernel: fuse: init (API version 7.39) Mar 17 17:31:30.103342 kernel: loop: module loaded Mar 17 17:31:30.103351 kernel: ACPI: bus type drm_connector registered Mar 17 17:31:30.103360 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:31:30.103371 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:31:30.103381 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:31:30.103392 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:31:30.103404 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 17 17:31:30.103414 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:31:30.103424 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 17:31:30.103434 systemd[1]: Stopped verity-setup.service. Mar 17 17:31:30.103464 systemd-journald[1121]: Collecting audit messages is disabled. Mar 17 17:31:30.103488 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:31:30.103499 systemd-journald[1121]: Journal started Mar 17 17:31:30.103520 systemd-journald[1121]: Runtime Journal (/run/log/journal/3a34a918cb074445a0a79d700152bc58) is 5.9M, max 47.3M, 41.4M free. Mar 17 17:31:29.911131 systemd[1]: Queued start job for default target multi-user.target. 
Mar 17 17:31:29.924677 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 17 17:31:29.925071 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 17:31:30.104825 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:31:30.106173 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:31:30.107439 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:31:30.108647 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:31:30.110162 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:31:30.111464 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:31:30.114908 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:31:30.116401 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:31:30.117912 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:31:30.118070 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:31:30.119561 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:31:30.119757 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:31:30.121122 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:31:30.121285 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:31:30.122580 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:31:30.122792 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:31:30.124363 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:31:30.124524 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:31:30.126022 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:31:30.126171 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:31:30.127556 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:31:30.128967 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:31:30.130674 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:31:30.132248 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 17 17:31:30.145181 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:31:30.156841 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:31:30.159114 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:31:30.160266 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:31:30.160311 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:31:30.162372 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 17 17:31:30.164777 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:31:30.166973 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:31:30.168119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 17 17:31:30.169893 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:31:30.173986 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:31:30.174880 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:31:30.181773 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:31:30.182005 systemd-journald[1121]: Time spent on flushing to /var/log/journal/3a34a918cb074445a0a79d700152bc58 is 24.422ms for 867 entries. Mar 17 17:31:30.182005 systemd-journald[1121]: System Journal (/var/log/journal/3a34a918cb074445a0a79d700152bc58) is 8M, max 195.6M, 187.6M free. Mar 17 17:31:30.220859 systemd-journald[1121]: Received client request to flush runtime journal. Mar 17 17:31:30.220916 kernel: loop0: detected capacity change from 0 to 194096 Mar 17 17:31:30.183874 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:31:30.185269 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:31:30.187971 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:31:30.192918 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:31:30.195765 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:31:30.197223 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:31:30.198712 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:31:30.200407 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:31:30.202794 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:31:30.206984 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:31:30.216973 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 17 17:31:30.219935 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:31:30.221888 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:31:30.223902 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:31:30.238898 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:31:30.238318 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 17 17:31:30.250966 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:31:30.260081 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:31:30.262122 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 17:31:30.279420 kernel: loop1: detected capacity change from 0 to 123192 Mar 17 17:31:30.285995 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Mar 17 17:31:30.286351 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Mar 17 17:31:30.290985 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 17 17:31:30.309754 kernel: loop2: detected capacity change from 0 to 113512 Mar 17 17:31:30.351776 kernel: loop3: detected capacity change from 0 to 194096 Mar 17 17:31:30.361764 kernel: loop4: detected capacity change from 0 to 123192 Mar 17 17:31:30.371770 kernel: loop5: detected capacity change from 0 to 113512 Mar 17 17:31:30.384906 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 17 17:31:30.385322 (sd-merge)[1194]: Merged extensions into '/usr'. Mar 17 17:31:30.388667 systemd[1]: Reload requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:31:30.388694 systemd[1]: Reloading... Mar 17 17:31:30.455901 zram_generator::config[1221]: No configuration found. Mar 17 17:31:30.507418 ldconfig[1163]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:31:30.548451 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:31:30.598630 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:31:30.598930 systemd[1]: Reloading finished in 209 ms. Mar 17 17:31:30.615568 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:31:30.616789 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:31:30.631171 systemd[1]: Starting ensure-sysext.service... Mar 17 17:31:30.636115 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:31:30.653115 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:31:30.653683 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:31:30.654114 systemd[1]: Reload requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:31:30.654135 systemd[1]: Reloading... Mar 17 17:31:30.654453 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:31:30.654652 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Mar 17 17:31:30.654718 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Mar 17 17:31:30.657921 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:31:30.658059 systemd-tmpfiles[1257]: Skipping /boot Mar 17 17:31:30.667178 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:31:30.667347 systemd-tmpfiles[1257]: Skipping /boot Mar 17 17:31:30.701756 zram_generator::config[1283]: No configuration found. Mar 17 17:31:30.790603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:31:30.841297 systemd[1]: Reloading finished in 186 ms. Mar 17 17:31:30.852301 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:31:30.867822 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:31:30.875498 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
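The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr; the /etc/extensions/kubernetes.raw symlink written by Ignition earlier is what pulls the kubernetes sysext into the merge. A rough sketch of how to list the images systemd-sysext would pick up (its documented search directories are /etc/extensions, /run/extensions and /var/lib/extensions):

    # Rough sketch: list extension images in the directories systemd-sysext scans.
    from pathlib import Path

    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        p = Path(d)
        if not p.is_dir():
            continue
        for img in sorted(p.iterdir()):
            # resolve() follows symlinks such as /etc/extensions/kubernetes.raw
            print(f"{img} -> {img.resolve()}")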
Mar 17 17:31:30.877648 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:31:30.879965 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:31:30.883518 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:31:30.888139 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:31:30.892665 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:31:30.897520 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:31:30.904258 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:31:30.909458 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:31:30.914055 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:31:30.915027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:31:30.915145 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:31:30.922137 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:31:30.924841 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:31:30.926549 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:31:30.926774 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:31:30.930726 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:31:30.930931 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:31:30.935792 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:31:30.936004 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:31:30.938273 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Mar 17 17:31:30.944235 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:31:30.951992 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:31:30.957033 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:31:30.960048 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:31:30.961371 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:31:30.961511 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:31:30.965001 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:31:30.967177 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:31:30.970704 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:31:30.972335 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Mar 17 17:31:30.974435 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:31:30.976219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:31:30.976377 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:31:30.979923 augenrules[1376]: No rules Mar 17 17:31:30.984123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:31:30.984547 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:31:30.986658 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:31:30.986899 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:31:30.989348 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:31:30.989512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:31:31.004486 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:31:31.019765 systemd[1]: Finished ensure-sysext.service. Mar 17 17:31:31.024726 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 17 17:31:31.032793 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1382) Mar 17 17:31:31.033952 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:31:31.035926 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:31:31.038661 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:31:31.048794 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:31:31.051641 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:31:31.054012 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:31:31.055173 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:31:31.055224 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 17 17:31:31.057150 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:31:31.069361 systemd-resolved[1326]: Positive Trust Anchors: Mar 17 17:31:31.069382 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:31:31.081100 augenrules[1398]: /sbin/augenrules: No change Mar 17 17:31:31.069413 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:31:31.079706 systemd-resolved[1326]: Defaulting to hostname 'linux'. Mar 17 17:31:31.082968 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
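The positive trust anchor logged by systemd-resolved above is the root zone's DNSSEC DS record (key tag 20326, algorithm 8 = RSASHA256, digest type 2 = SHA-256). Its fields split out per RFC 4034:

    # Parse the DS record systemd-resolved logged as its positive trust anchor.
    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = ds.split()
    print({"owner": owner,
           "key_tag": int(key_tag),          # 20326: the KSK-2017 root key
           "algorithm": int(algorithm),      # 8 = RSASHA256
           "digest_type": int(digest_type),  # 2 = SHA-256
           "digest": digest})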
Mar 17 17:31:31.084553 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:31:31.085133 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:31:31.085310 augenrules[1424]: No rules Mar 17 17:31:31.086971 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:31:31.087181 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:31:31.088234 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:31:31.088409 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:31:31.091247 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:31:31.091406 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:31:31.092641 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:31:31.093015 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:31:31.094196 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:31:31.094476 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:31:31.117448 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:31:31.118578 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:31:31.126004 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:31:31.126874 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:31:31.126945 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:31:31.153040 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:31:31.170202 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:31:31.171689 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 17 17:31:31.174096 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:31:31.180389 systemd-networkd[1411]: lo: Link UP Mar 17 17:31:31.180405 systemd-networkd[1411]: lo: Gained carrier Mar 17 17:31:31.181377 systemd-networkd[1411]: Enumeration completed Mar 17 17:31:31.181558 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:31:31.182885 systemd[1]: Reached target network.target - Network. Mar 17 17:31:31.183662 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:31:31.183678 systemd-networkd[1411]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:31:31.189725 systemd-networkd[1411]: eth0: Link UP Mar 17 17:31:31.189754 systemd-networkd[1411]: eth0: Gained carrier Mar 17 17:31:31.189769 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:31:31.194901 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Mar 17 17:31:31.197352 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:31:31.198804 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:31:31.204938 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:31:31.214558 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:31:31.217822 systemd-networkd[1411]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:31:31.219893 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Mar 17 17:31:31.220483 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 17 17:31:31.221836 systemd-timesyncd[1423]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 17:31:31.221892 systemd-timesyncd[1423]: Initial clock synchronization to Mon 2025-03-17 17:31:30.996587 UTC. Mar 17 17:31:31.224746 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:31:31.256829 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:31:31.258316 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:31:31.259440 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:31:31.260599 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:31:31.261840 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:31:31.263197 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:31:31.264388 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:31:31.265586 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:31:31.266839 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:31:31.266876 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:31:31.267717 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:31:31.269522 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:31:31.271825 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:31:31.274925 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 17 17:31:31.276292 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 17 17:31:31.277535 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 17 17:31:31.283632 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:31:31.285065 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 17 17:31:31.287301 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:31:31.288871 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:31:31.289987 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:31:31.290894 systemd[1]: Reached target basic.target - Basic System. 
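The DHCPv4 lease above (10.0.0.45/16 with gateway 10.0.0.1, acquired from 10.0.0.1) can be sanity-checked with nothing but the standard library:

    # Sanity-check the logged DHCPv4 lease with the stdlib ipaddress module.
    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.45/16")
    gateway = ipaddress.ip_address("10.0.0.1")
    print(iface.network)                # 10.0.0.0/16
    print(gateway in iface.network)     # True: the gateway is on-link
    print(iface.network.num_addresses)  # 65536 addresses in a /16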
Mar 17 17:31:31.291832 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:31:31.291863 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:31:31.292752 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:31:31.294532 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:31:31.294906 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:31:31.296858 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:31:31.300719 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:31:31.304309 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:31:31.305318 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:31:31.308924 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:31:31.316796 jq[1460]: false Mar 17 17:31:31.312467 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:31:31.314579 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:31:31.318209 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:31:31.320543 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:31:31.321019 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:31:31.324279 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:31:31.327095 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:31:31.329003 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:31:31.334082 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:31:31.334293 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:31:31.335142 dbus-daemon[1459]: [system] SELinux support is enabled Mar 17 17:31:31.338206 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:31:31.339125 jq[1472]: true Mar 17 17:31:31.345829 extend-filesystems[1461]: Found loop3 Mar 17 17:31:31.345829 extend-filesystems[1461]: Found loop4 Mar 17 17:31:31.345829 extend-filesystems[1461]: Found loop5 Mar 17 17:31:31.345829 extend-filesystems[1461]: Found vda Mar 17 17:31:31.345829 extend-filesystems[1461]: Found vda1 Mar 17 17:31:31.345829 extend-filesystems[1461]: Found vda2 Mar 17 17:31:31.345829 extend-filesystems[1461]: Found vda3 Mar 17 17:31:31.345829 extend-filesystems[1461]: Found usr Mar 17 17:31:31.345829 extend-filesystems[1461]: Found vda4 Mar 17 17:31:31.345829 extend-filesystems[1461]: Found vda6 Mar 17 17:31:31.345829 extend-filesystems[1461]: Found vda7 Mar 17 17:31:31.345829 extend-filesystems[1461]: Found vda9 Mar 17 17:31:31.345829 extend-filesystems[1461]: Checking size of /dev/vda9 Mar 17 17:31:31.343100 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Mar 17 17:31:31.343284 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:31:31.344785 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:31:31.344945 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:31:31.369589 jq[1481]: true Mar 17 17:31:31.370558 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:31:31.374315 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:31:31.374359 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:31:31.375966 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:31:31.375983 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:31:31.377608 tar[1478]: linux-arm64/helm Mar 17 17:31:31.383471 update_engine[1470]: I20250317 17:31:31.383323 1470 main.cc:92] Flatcar Update Engine starting Mar 17 17:31:31.389827 extend-filesystems[1461]: Resized partition /dev/vda9 Mar 17 17:31:31.390801 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:31:31.393647 update_engine[1470]: I20250317 17:31:31.393433 1470 update_check_scheduler.cc:74] Next update check in 11m43s Mar 17 17:31:31.393686 extend-filesystems[1501]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:31:31.398905 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:31:31.409813 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 17:31:31.420735 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 17:31:31.425831 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1372) Mar 17 17:31:31.426034 systemd-logind[1468]: New seat seat0. Mar 17 17:31:31.432581 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:31:31.447781 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 17:31:31.475946 extend-filesystems[1501]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 17:31:31.475946 extend-filesystems[1501]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:31:31.475946 extend-filesystems[1501]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 17:31:31.487104 extend-filesystems[1461]: Resized filesystem in /dev/vda9 Mar 17 17:31:31.487808 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:31:31.477271 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:31:31.477514 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:31:31.481897 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:31:31.485418 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
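The extend-filesystems/resize2fs entries above grow the root ext4 filesystem on /dev/vda9 from 553472 to 1864699 blocks of 4 KiB each; in human units:

    # Convert the logged ext4 block counts (4 KiB blocks) to GiB.
    BLOCK = 4096
    old_blocks, new_blocks = 553_472, 1_864_699
    gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"before resize: {gib(old_blocks):.2f} GiB")  # ~2.11 GiB
    print(f"after resize:  {gib(new_blocks):.2f} GiB")  # ~7.11 GiB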
Mar 17 17:31:31.493141 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:31:31.586830 containerd[1483]: time="2025-03-17T17:31:31.586222160Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:31:31.619012 containerd[1483]: time="2025-03-17T17:31:31.618959200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:31:31.620468 containerd[1483]: time="2025-03-17T17:31:31.620428760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:31:31.620556 containerd[1483]: time="2025-03-17T17:31:31.620540640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:31:31.620610 containerd[1483]: time="2025-03-17T17:31:31.620598480Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:31:31.620875 containerd[1483]: time="2025-03-17T17:31:31.620851120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:31:31.621532 containerd[1483]: time="2025-03-17T17:31:31.620932400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:31:31.621532 containerd[1483]: time="2025-03-17T17:31:31.621005800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:31:31.621532 containerd[1483]: time="2025-03-17T17:31:31.621021720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:31:31.621532 containerd[1483]: time="2025-03-17T17:31:31.621212840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:31:31.621532 containerd[1483]: time="2025-03-17T17:31:31.621227240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:31:31.621532 containerd[1483]: time="2025-03-17T17:31:31.621239720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:31:31.621532 containerd[1483]: time="2025-03-17T17:31:31.621249240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:31:31.621532 containerd[1483]: time="2025-03-17T17:31:31.621322440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:31:31.621532 containerd[1483]: time="2025-03-17T17:31:31.621506120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:31:31.621763 containerd[1483]: time="2025-03-17T17:31:31.621616640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:31:31.621763 containerd[1483]: time="2025-03-17T17:31:31.621629400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:31:31.621806 containerd[1483]: time="2025-03-17T17:31:31.621764360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:31:31.621943 containerd[1483]: time="2025-03-17T17:31:31.621818880Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:31:31.677341 containerd[1483]: time="2025-03-17T17:31:31.677277480Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:31:31.677467 containerd[1483]: time="2025-03-17T17:31:31.677362720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:31:31.677467 containerd[1483]: time="2025-03-17T17:31:31.677380120Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:31:31.677467 containerd[1483]: time="2025-03-17T17:31:31.677397320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:31:31.677467 containerd[1483]: time="2025-03-17T17:31:31.677414240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:31:31.677715 containerd[1483]: time="2025-03-17T17:31:31.677601840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:31:31.677887 containerd[1483]: time="2025-03-17T17:31:31.677866880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:31:31.677997 containerd[1483]: time="2025-03-17T17:31:31.677979600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:31:31.678022 containerd[1483]: time="2025-03-17T17:31:31.678002480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:31:31.678022 containerd[1483]: time="2025-03-17T17:31:31.678017040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:31:31.678074 containerd[1483]: time="2025-03-17T17:31:31.678030120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:31:31.678074 containerd[1483]: time="2025-03-17T17:31:31.678043200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:31:31.678074 containerd[1483]: time="2025-03-17T17:31:31.678058200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:31:31.678074 containerd[1483]: time="2025-03-17T17:31:31.678071040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:31:31.678148 containerd[1483]: time="2025-03-17T17:31:31.678085720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Mar 17 17:31:31.678148 containerd[1483]: time="2025-03-17T17:31:31.678104200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:31:31.678148 containerd[1483]: time="2025-03-17T17:31:31.678118280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:31:31.678148 containerd[1483]: time="2025-03-17T17:31:31.678129560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:31:31.678212 containerd[1483]: time="2025-03-17T17:31:31.678148760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:31:31.678212 containerd[1483]: time="2025-03-17T17:31:31.678162360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:31:31.678212 containerd[1483]: time="2025-03-17T17:31:31.678174040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:31:31.678212 containerd[1483]: time="2025-03-17T17:31:31.678185600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:31:31.678212 containerd[1483]: time="2025-03-17T17:31:31.678197200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:31:31.678212 containerd[1483]: time="2025-03-17T17:31:31.678209080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:31:31.678310 containerd[1483]: time="2025-03-17T17:31:31.678220280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:31:31.678310 containerd[1483]: time="2025-03-17T17:31:31.678237400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:31:31.678310 containerd[1483]: time="2025-03-17T17:31:31.678250040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:31:31.678310 containerd[1483]: time="2025-03-17T17:31:31.678264760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:31:31.678310 containerd[1483]: time="2025-03-17T17:31:31.678276120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:31:31.678310 containerd[1483]: time="2025-03-17T17:31:31.678287040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:31:31.678310 containerd[1483]: time="2025-03-17T17:31:31.678298520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:31:31.678420 containerd[1483]: time="2025-03-17T17:31:31.678314440Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:31:31.678420 containerd[1483]: time="2025-03-17T17:31:31.678334720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:31:31.678420 containerd[1483]: time="2025-03-17T17:31:31.678350680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Mar 17 17:31:31.678420 containerd[1483]: time="2025-03-17T17:31:31.678361400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:31:31.678586 containerd[1483]: time="2025-03-17T17:31:31.678524280Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:31:31.678639 containerd[1483]: time="2025-03-17T17:31:31.678623800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:31:31.678722 containerd[1483]: time="2025-03-17T17:31:31.678637440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:31:31.678722 containerd[1483]: time="2025-03-17T17:31:31.678652880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:31:31.678722 containerd[1483]: time="2025-03-17T17:31:31.678662440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:31:31.678722 containerd[1483]: time="2025-03-17T17:31:31.678687560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:31:31.678722 containerd[1483]: time="2025-03-17T17:31:31.678697960Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:31:31.678722 containerd[1483]: time="2025-03-17T17:31:31.678708440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 17:31:31.679101 containerd[1483]: time="2025-03-17T17:31:31.679053400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:31:31.679218 containerd[1483]: time="2025-03-17T17:31:31.679108120Z" level=info msg="Connect containerd service" Mar 17 17:31:31.679218 containerd[1483]: time="2025-03-17T17:31:31.679145000Z" level=info msg="using legacy CRI server" Mar 17 17:31:31.679218 containerd[1483]: time="2025-03-17T17:31:31.679151320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:31:31.681191 containerd[1483]: time="2025-03-17T17:31:31.681166240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:31:31.681933 containerd[1483]: time="2025-03-17T17:31:31.681880000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:31:31.682096 containerd[1483]: time="2025-03-17T17:31:31.682070120Z" level=info msg="Start subscribing containerd event" Mar 17 17:31:31.682328 containerd[1483]: time="2025-03-17T17:31:31.682113600Z" level=info msg="Start recovering state" Mar 17 17:31:31.682328 containerd[1483]: time="2025-03-17T17:31:31.682171560Z" level=info msg="Start event monitor" Mar 17 17:31:31.682328 containerd[1483]: time="2025-03-17T17:31:31.682182240Z" level=info msg="Start snapshots syncer" Mar 17 17:31:31.682328 containerd[1483]: time="2025-03-17T17:31:31.682191680Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:31:31.682328 containerd[1483]: time="2025-03-17T17:31:31.682198800Z" level=info msg="Start streaming server" Mar 17 17:31:31.682789 containerd[1483]: time="2025-03-17T17:31:31.682767320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:31:31.682890 containerd[1483]: time="2025-03-17T17:31:31.682867440Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:31:31.683020 containerd[1483]: time="2025-03-17T17:31:31.682925400Z" level=info msg="containerd successfully booted in 0.099098s" Mar 17 17:31:31.683118 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:31:31.780724 tar[1478]: linux-arm64/LICENSE Mar 17 17:31:31.780942 tar[1478]: linux-arm64/README.md Mar 17 17:31:31.799826 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:31:31.879263 sshd_keygen[1480]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:31:31.896816 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:31:31.918409 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:31:31.923415 systemd[1]: issuegen.service: Deactivated successfully. 
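
The "failed to load cni during init" error above (17:31:31.681880) is expected on a fresh node: containerd's CRI plugin found nothing under /etc/cni/net.d, and its conf syncer ("Start cni network conf syncer for default") keeps retrying until a network plugin drops a config there. In a real cluster that file usually comes from a CNI add-on (flannel, Calico, etc.). Purely as an illustration, and assuming the reference bridge and host-local plugins exist under the NetworkPluginBinDir /opt/cni/bin from the config dump above, a minimal bridge conflist for a single-node lab could look like this (name and subnet are hypothetical, not taken from this boot):

    mkdir -p /etc/cni/net.d
    cat <<'EOF' >/etc/cni/net.d/10-bridge.conflist   # hypothetical example, not from this host
    {
      "cniVersion": "1.0.0",
      "name": "lab-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/24",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        }
      ]
    }
    EOF

With NetworkPluginMaxConfNum:1 (see the CRI config dump above), the syncer loads only the lexically first file, so a single conflist is enough.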
Mar 17 17:31:31.923637 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:31:31.926912 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:31:31.937416 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:31:31.940174 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:31:31.942272 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 17 17:31:31.943693 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:31:33.071870 systemd-networkd[1411]: eth0: Gained IPv6LL Mar 17 17:31:33.074602 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:31:33.076311 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:31:33.085099 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:31:33.087499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:31:33.089635 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:31:33.103695 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:31:33.104546 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:31:33.106677 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:31:33.116975 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:31:33.571850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:31:33.573213 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:31:33.575832 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:31:33.579680 systemd[1]: Startup finished in 538ms (kernel) + 5.827s (initrd) + 4.093s (userspace) = 10.458s. Mar 17 17:31:34.017599 kubelet[1572]: E0317 17:31:34.017482 1572 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:31:34.019846 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:31:34.019985 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:31:34.021840 systemd[1]: kubelet.service: Consumed 789ms CPU time, 241.3M memory peak. Mar 17 17:31:36.104207 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:31:36.105369 systemd[1]: Started sshd@0-10.0.0.45:22-10.0.0.1:48436.service - OpenSSH per-connection server daemon (10.0.0.1:48436). Mar 17 17:31:36.168584 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 48436 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:31:36.170336 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:36.176197 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:31:36.191044 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:31:36.196321 systemd-logind[1468]: New session 1 of user core. Mar 17 17:31:36.199911 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
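
The kubelet crash at 17:31:34 ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal first-boot state on a kubeadm-style node: the unit starts before kubeadm init has generated the kubelet's config file, exits, and is restarted by systemd until the file exists (the "Scheduled restart job, restart counter is at 1" line later in this log shows exactly that loop). For orientation only, a hand-written sketch of the kind of file kubeadm generates might be (field values assumed, not read from this host):

    cat <<'EOF' >/var/lib/kubelet/config.yaml   # illustrative sketch; kubeadm normally writes this file
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                # matches SystemdCgroup:true in the containerd CRI config above
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      anonymous:
        enabled: false
    EOF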
Mar 17 17:31:36.203054 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:31:36.208685 (systemd)[1591]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:31:36.210861 systemd-logind[1468]: New session c1 of user core. Mar 17 17:31:36.301782 systemd[1591]: Queued start job for default target default.target. Mar 17 17:31:36.310641 systemd[1591]: Created slice app.slice - User Application Slice. Mar 17 17:31:36.310670 systemd[1591]: Reached target paths.target - Paths. Mar 17 17:31:36.310707 systemd[1591]: Reached target timers.target - Timers. Mar 17 17:31:36.311934 systemd[1591]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:31:36.321057 systemd[1591]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:31:36.321124 systemd[1591]: Reached target sockets.target - Sockets. Mar 17 17:31:36.321164 systemd[1591]: Reached target basic.target - Basic System. Mar 17 17:31:36.321192 systemd[1591]: Reached target default.target - Main User Target. Mar 17 17:31:36.321218 systemd[1591]: Startup finished in 104ms. Mar 17 17:31:36.321395 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:31:36.322826 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:31:36.376984 systemd[1]: Started sshd@1-10.0.0.45:22-10.0.0.1:48452.service - OpenSSH per-connection server daemon (10.0.0.1:48452). Mar 17 17:31:36.420450 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 48452 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:31:36.421772 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:36.426464 systemd-logind[1468]: New session 2 of user core. Mar 17 17:31:36.437913 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:31:36.489385 sshd[1604]: Connection closed by 10.0.0.1 port 48452 Mar 17 17:31:36.489708 sshd-session[1602]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:36.504940 systemd[1]: sshd@1-10.0.0.45:22-10.0.0.1:48452.service: Deactivated successfully. Mar 17 17:31:36.506473 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:31:36.507145 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:31:36.508832 systemd[1]: Started sshd@2-10.0.0.45:22-10.0.0.1:48464.service - OpenSSH per-connection server daemon (10.0.0.1:48464). Mar 17 17:31:36.510072 systemd-logind[1468]: Removed session 2. Mar 17 17:31:36.549825 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 48464 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:31:36.551137 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:36.555605 systemd-logind[1468]: New session 3 of user core. Mar 17 17:31:36.564992 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:31:36.612351 sshd[1612]: Connection closed by 10.0.0.1 port 48464 Mar 17 17:31:36.612213 sshd-session[1609]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:36.621825 systemd[1]: sshd@2-10.0.0.45:22-10.0.0.1:48464.service: Deactivated successfully. Mar 17 17:31:36.623208 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:31:36.623978 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:31:36.630996 systemd[1]: Started sshd@3-10.0.0.45:22-10.0.0.1:48470.service - OpenSSH per-connection server daemon (10.0.0.1:48470). 
Mar 17 17:31:36.632027 systemd-logind[1468]: Removed session 3. Mar 17 17:31:36.669113 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 48470 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:31:36.670431 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:36.674903 systemd-logind[1468]: New session 4 of user core. Mar 17 17:31:36.692902 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:31:36.743617 sshd[1620]: Connection closed by 10.0.0.1 port 48470 Mar 17 17:31:36.747349 sshd-session[1617]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:36.761950 systemd[1]: sshd@3-10.0.0.45:22-10.0.0.1:48470.service: Deactivated successfully. Mar 17 17:31:36.763521 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:31:36.764197 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:31:36.775023 systemd[1]: Started sshd@4-10.0.0.45:22-10.0.0.1:48484.service - OpenSSH per-connection server daemon (10.0.0.1:48484). Mar 17 17:31:36.775950 systemd-logind[1468]: Removed session 4. Mar 17 17:31:36.813121 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 48484 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:31:36.814292 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:36.818070 systemd-logind[1468]: New session 5 of user core. Mar 17 17:31:36.829912 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:31:36.886482 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:31:36.886812 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:31:36.903561 sudo[1629]: pam_unix(sudo:session): session closed for user root Mar 17 17:31:36.905646 sshd[1628]: Connection closed by 10.0.0.1 port 48484 Mar 17 17:31:36.905438 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:36.916988 systemd[1]: sshd@4-10.0.0.45:22-10.0.0.1:48484.service: Deactivated successfully. Mar 17 17:31:36.918553 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:31:36.919349 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:31:36.930030 systemd[1]: Started sshd@5-10.0.0.45:22-10.0.0.1:48496.service - OpenSSH per-connection server daemon (10.0.0.1:48496). Mar 17 17:31:36.930889 systemd-logind[1468]: Removed session 5. Mar 17 17:31:36.971382 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 48496 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:31:36.972646 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:36.976811 systemd-logind[1468]: New session 6 of user core. Mar 17 17:31:36.984886 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 17 17:31:37.034372 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:31:37.034657 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:31:37.038040 sudo[1639]: pam_unix(sudo:session): session closed for user root Mar 17 17:31:37.042835 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:31:37.043108 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:31:37.062115 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:31:37.085207 augenrules[1661]: No rules Mar 17 17:31:37.086308 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:31:37.086513 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:31:37.087631 sudo[1638]: pam_unix(sudo:session): session closed for user root Mar 17 17:31:37.088962 sshd[1637]: Connection closed by 10.0.0.1 port 48496 Mar 17 17:31:37.089358 sshd-session[1634]: pam_unix(sshd:session): session closed for user core Mar 17 17:31:37.101126 systemd[1]: sshd@5-10.0.0.45:22-10.0.0.1:48496.service: Deactivated successfully. Mar 17 17:31:37.102633 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:31:37.103328 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:31:37.105197 systemd[1]: Started sshd@6-10.0.0.45:22-10.0.0.1:48508.service - OpenSSH per-connection server daemon (10.0.0.1:48508). Mar 17 17:31:37.105962 systemd-logind[1468]: Removed session 6. Mar 17 17:31:37.146650 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 48508 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:31:37.148080 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:31:37.151796 systemd-logind[1468]: New session 7 of user core. Mar 17 17:31:37.161931 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:31:37.211495 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:31:37.211805 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:31:37.544056 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:31:37.544151 (dockerd)[1693]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:31:37.792772 dockerd[1693]: time="2025-03-17T17:31:37.792618857Z" level=info msg="Starting up" Mar 17 17:31:38.023959 dockerd[1693]: time="2025-03-17T17:31:38.023841249Z" level=info msg="Loading containers: start." Mar 17 17:31:38.162747 kernel: Initializing XFRM netlink socket Mar 17 17:31:38.231792 systemd-networkd[1411]: docker0: Link UP Mar 17 17:31:38.271329 dockerd[1693]: time="2025-03-17T17:31:38.271276005Z" level=info msg="Loading containers: done." 
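
dockerd's "Loading containers" phase is where it builds its network plumbing: the XFRM netlink socket and the docker0 bridge that systemd-networkd reports as "Link UP" just above. Once the daemon logs that initialization has completed (just below) the API socket is usable; a quick sanity check, assuming the default /run/docker.sock path:

    ip link show docker0                                      # bridge created during "Loading containers"
    docker info --format '{{.ServerVersion}} / {{.Driver}}'   # expect: 27.3.1 / overlay2, per the daemon lines below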
Mar 17 17:31:38.315666 dockerd[1693]: time="2025-03-17T17:31:38.315548021Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:31:38.315830 dockerd[1693]: time="2025-03-17T17:31:38.315672547Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 17 17:31:38.315914 dockerd[1693]: time="2025-03-17T17:31:38.315876301Z" level=info msg="Daemon has completed initialization" Mar 17 17:31:38.437283 dockerd[1693]: time="2025-03-17T17:31:38.437222733Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:31:38.437398 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:31:39.104633 containerd[1483]: time="2025-03-17T17:31:39.104593857Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 17:31:39.672533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3175102470.mount: Deactivated successfully. Mar 17 17:31:40.725109 containerd[1483]: time="2025-03-17T17:31:40.724938348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:40.725965 containerd[1483]: time="2025-03-17T17:31:40.725678044Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=29793526" Mar 17 17:31:40.726784 containerd[1483]: time="2025-03-17T17:31:40.726755956Z" level=info msg="ImageCreate event name:\"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:40.730688 containerd[1483]: time="2025-03-17T17:31:40.730621287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:40.732859 containerd[1483]: time="2025-03-17T17:31:40.732238017Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"29790324\" in 1.627602258s" Mar 17 17:31:40.732859 containerd[1483]: time="2025-03-17T17:31:40.732286583Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Mar 17 17:31:40.751742 containerd[1483]: time="2025-03-17T17:31:40.751688855Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 17:31:42.007095 containerd[1483]: time="2025-03-17T17:31:42.007046377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:42.007886 containerd[1483]: time="2025-03-17T17:31:42.007846290Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=26861169" Mar 17 17:31:42.008762 containerd[1483]: time="2025-03-17T17:31:42.008717391Z" level=info msg="ImageCreate event 
name:\"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:42.011784 containerd[1483]: time="2025-03-17T17:31:42.011754335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:42.013899 containerd[1483]: time="2025-03-17T17:31:42.013818099Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"28301963\" in 1.262082064s" Mar 17 17:31:42.013899 containerd[1483]: time="2025-03-17T17:31:42.013853753Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Mar 17 17:31:42.034180 containerd[1483]: time="2025-03-17T17:31:42.034113649Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 17:31:42.892838 containerd[1483]: time="2025-03-17T17:31:42.892775766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:42.893450 containerd[1483]: time="2025-03-17T17:31:42.893394947Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=16264638" Mar 17 17:31:42.894131 containerd[1483]: time="2025-03-17T17:31:42.894098061Z" level=info msg="ImageCreate event name:\"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:42.897801 containerd[1483]: time="2025-03-17T17:31:42.897759784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:42.898845 containerd[1483]: time="2025-03-17T17:31:42.898805066Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"17705450\" in 864.65203ms" Mar 17 17:31:42.898845 containerd[1483]: time="2025-03-17T17:31:42.898840482Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Mar 17 17:31:42.917494 containerd[1483]: time="2025-03-17T17:31:42.917459897Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 17:31:43.850436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1159445477.mount: Deactivated successfully. 
Mar 17 17:31:44.047318 containerd[1483]: time="2025-03-17T17:31:44.047169158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:44.048126 containerd[1483]: time="2025-03-17T17:31:44.047936264Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771850" Mar 17 17:31:44.048819 containerd[1483]: time="2025-03-17T17:31:44.048787809Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:44.051331 containerd[1483]: time="2025-03-17T17:31:44.051302073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:44.052478 containerd[1483]: time="2025-03-17T17:31:44.052398820Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.1349027s" Mar 17 17:31:44.052478 containerd[1483]: time="2025-03-17T17:31:44.052428689Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 17 17:31:44.071921 containerd[1483]: time="2025-03-17T17:31:44.071869960Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 17:31:44.270326 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:31:44.280929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:31:44.377789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:31:44.381579 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:31:44.432031 kubelet[2004]: E0317 17:31:44.431920 2004 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:31:44.435078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:31:44.435242 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:31:44.436843 systemd[1]: kubelet.service: Consumed 139ms CPU time, 98M memory peak. Mar 17 17:31:44.632023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount98895017.mount: Deactivated successfully. 
Mar 17 17:31:45.324878 containerd[1483]: time="2025-03-17T17:31:45.324821776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:45.325455 containerd[1483]: time="2025-03-17T17:31:45.325403317Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Mar 17 17:31:45.326519 containerd[1483]: time="2025-03-17T17:31:45.326485203Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:45.335499 containerd[1483]: time="2025-03-17T17:31:45.333113916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:45.335499 containerd[1483]: time="2025-03-17T17:31:45.335256713Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.263343392s" Mar 17 17:31:45.335499 containerd[1483]: time="2025-03-17T17:31:45.335297828Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 17:31:45.354458 containerd[1483]: time="2025-03-17T17:31:45.354222201Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 17:31:45.799698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3530716580.mount: Deactivated successfully. 
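
One detail worth noticing: the client here pulls registry.k8s.io/pause:3.9 (completing just below), but containerd's own CRI config in the dump above still sets SandboxImage:registry.k8s.io/pause:3.8, which is why the sandbox setup at the end of this log fetches pause:3.8 as well even though 3.9 is already present. The effective sandbox image can be confirmed from the runtime status, again assuming crictl:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock info | grep -i sandboxImage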
Mar 17 17:31:45.803215 containerd[1483]: time="2025-03-17T17:31:45.803165999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:45.803650 containerd[1483]: time="2025-03-17T17:31:45.803602622Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Mar 17 17:31:45.804544 containerd[1483]: time="2025-03-17T17:31:45.804516784Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:45.806698 containerd[1483]: time="2025-03-17T17:31:45.806659343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:45.807668 containerd[1483]: time="2025-03-17T17:31:45.807623416Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 453.365514ms" Mar 17 17:31:45.807668 containerd[1483]: time="2025-03-17T17:31:45.807658362Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 17 17:31:45.826439 containerd[1483]: time="2025-03-17T17:31:45.826402354Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 17:31:46.276774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount906646181.mount: Deactivated successfully. Mar 17 17:31:47.638382 containerd[1483]: time="2025-03-17T17:31:47.638333449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:47.639409 containerd[1483]: time="2025-03-17T17:31:47.639328734Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Mar 17 17:31:47.640065 containerd[1483]: time="2025-03-17T17:31:47.640033448Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:47.643482 containerd[1483]: time="2025-03-17T17:31:47.643421012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:31:47.644841 containerd[1483]: time="2025-03-17T17:31:47.644806450Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 1.818201495s" Mar 17 17:31:47.644878 containerd[1483]: time="2025-03-17T17:31:47.644841357Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 17 17:31:52.875780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:31:52.875938 systemd[1]: kubelet.service: Consumed 139ms CPU time, 98M memory peak. Mar 17 17:31:52.889992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:31:52.907227 systemd[1]: Reload requested from client PID 2199 ('systemctl') (unit session-7.scope)... Mar 17 17:31:52.907245 systemd[1]: Reloading... Mar 17 17:31:52.978762 zram_generator::config[2241]: No configuration found. Mar 17 17:31:53.108401 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:31:53.181302 systemd[1]: Reloading finished in 273 ms. Mar 17 17:31:53.228993 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:31:53.231967 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:31:53.232833 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:31:53.233038 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:31:53.233082 systemd[1]: kubelet.service: Consumed 79ms CPU time, 82.4M memory peak. Mar 17 17:31:53.234630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:31:53.326645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:31:53.331005 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:31:53.375525 kubelet[2290]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:31:53.375525 kubelet[2290]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:31:53.375525 kubelet[2290]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
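
The "Reload requested from client PID 2199 ('systemctl')" above is a daemon-reload issued from SSH session-7, consistent with an installer having just (re)written unit drop-ins. The three deprecation warnings that follow refer to flags passed on the kubelet command line rather than via config.yaml; on kubeadm-managed nodes those flags normally come from the stock systemd drop-in plus kubeadm-flags.env. A sketch modeled on kubeadm's standard 10-kubeadm.conf (the exact path and contents on this Flatcar host are assumed, not shown in the log):

    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

This layout also explains the earlier "Referenced but unset environment variable" notices for KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS: the unit references them whether or not the environment files define them yet.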
Mar 17 17:31:53.376647 kubelet[2290]: I0317 17:31:53.376592 2290 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:31:53.817433 kubelet[2290]: I0317 17:31:53.817397 2290 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:31:53.818759 kubelet[2290]: I0317 17:31:53.817571 2290 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:31:53.818759 kubelet[2290]: I0317 17:31:53.817836 2290 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:31:53.848886 kubelet[2290]: E0317 17:31:53.848843 2290 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.45:6443: connect: connection refused Mar 17 17:31:53.849255 kubelet[2290]: I0317 17:31:53.849126 2290 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:31:53.858174 kubelet[2290]: I0317 17:31:53.858135 2290 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:31:53.859710 kubelet[2290]: I0317 17:31:53.859194 2290 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:31:53.859710 kubelet[2290]: I0317 17:31:53.859239 2290 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:31:53.859710 kubelet[2290]: I0317 17:31:53.859579 2290 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:31:53.859710 kubelet[2290]: I0317 17:31:53.859591 2290 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:31:53.861077 kubelet[2290]: I0317 17:31:53.860816 2290 state_mem.go:36] "Initialized new in-memory state store" Mar 17 
17:31:53.861990 kubelet[2290]: I0317 17:31:53.861963 2290 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:31:53.861990 kubelet[2290]: I0317 17:31:53.861986 2290 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:31:53.862848 kubelet[2290]: I0317 17:31:53.862187 2290 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:31:53.862848 kubelet[2290]: I0317 17:31:53.862360 2290 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:31:53.863111 kubelet[2290]: W0317 17:31:53.863057 2290 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Mar 17 17:31:53.863166 kubelet[2290]: E0317 17:31:53.863123 2290 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Mar 17 17:31:53.863166 kubelet[2290]: W0317 17:31:53.863057 2290 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Mar 17 17:31:53.863166 kubelet[2290]: E0317 17:31:53.863146 2290 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Mar 17 17:31:53.864070 kubelet[2290]: I0317 17:31:53.864042 2290 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:31:53.864701 kubelet[2290]: I0317 17:31:53.864682 2290 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:31:53.864852 kubelet[2290]: W0317 17:31:53.864838 2290 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
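
The reflector "connection refused" storms against https://10.0.0.45:6443 that follow are the classic control-plane bootstrap loop, not a fault: this kubelet needs the API server, but the API server is itself one of the static pods the kubelet is about to start from /etc/kubernetes/manifests (the static pod path added just above). They stop once that pod is serving; a simple wait loop works because /healthz is readable without credentials under kubeadm's default RBAC:

    until curl -k -sS https://10.0.0.45:6443/healthz >/dev/null 2>&1; do sleep 2; done
    curl -k https://10.0.0.45:6443/healthz   # prints "ok" once the static kube-apiserver pod is up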
Mar 17 17:31:53.866317 kubelet[2290]: I0317 17:31:53.865983 2290 server.go:1264] "Started kubelet" Mar 17 17:31:53.866634 kubelet[2290]: I0317 17:31:53.866602 2290 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:31:53.868154 kubelet[2290]: I0317 17:31:53.868040 2290 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:31:53.869110 kubelet[2290]: E0317 17:31:53.868791 2290 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.45:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.45:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da76c2a0b7571 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:31:53.865950577 +0000 UTC m=+0.531906427,LastTimestamp:2025-03-17 17:31:53.865950577 +0000 UTC m=+0.531906427,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 17:31:53.870613 kubelet[2290]: I0317 17:31:53.870540 2290 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:31:53.871018 kubelet[2290]: I0317 17:31:53.870786 2290 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:31:53.871018 kubelet[2290]: I0317 17:31:53.870866 2290 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:31:53.873051 kubelet[2290]: I0317 17:31:53.872091 2290 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:31:53.874407 kubelet[2290]: I0317 17:31:53.874380 2290 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:31:53.874773 kubelet[2290]: E0317 17:31:53.874748 2290 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:31:53.875064 kubelet[2290]: W0317 17:31:53.875014 2290 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Mar 17 17:31:53.875125 kubelet[2290]: E0317 17:31:53.875074 2290 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Mar 17 17:31:53.875549 kubelet[2290]: E0317 17:31:53.875511 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="200ms" Mar 17 17:31:53.875799 kubelet[2290]: I0317 17:31:53.875777 2290 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:31:53.876182 kubelet[2290]: I0317 17:31:53.876156 2290 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:31:53.876282 kubelet[2290]: I0317 17:31:53.876260 2290 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:31:53.876650 kubelet[2290]: E0317 17:31:53.876555 2290 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:31:53.878051 kubelet[2290]: I0317 17:31:53.878020 2290 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:31:53.888985 kubelet[2290]: I0317 17:31:53.888845 2290 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:31:53.888985 kubelet[2290]: I0317 17:31:53.888860 2290 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:31:53.888985 kubelet[2290]: I0317 17:31:53.888880 2290 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:31:53.890746 kubelet[2290]: I0317 17:31:53.890690 2290 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:31:53.891953 kubelet[2290]: I0317 17:31:53.891901 2290 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:31:53.892098 kubelet[2290]: I0317 17:31:53.892066 2290 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:31:53.892098 kubelet[2290]: I0317 17:31:53.892089 2290 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:31:53.892173 kubelet[2290]: E0317 17:31:53.892141 2290 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:31:53.976681 kubelet[2290]: I0317 17:31:53.976612 2290 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:31:53.976997 kubelet[2290]: E0317 17:31:53.976958 2290 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Mar 17 17:31:53.993184 kubelet[2290]: E0317 17:31:53.993146 2290 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:31:54.022219 kubelet[2290]: W0317 17:31:54.022163 2290 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Mar 17 17:31:54.022274 kubelet[2290]: E0317 17:31:54.022223 2290 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Mar 17 17:31:54.026242 kubelet[2290]: I0317 17:31:54.026140 2290 policy_none.go:49] "None policy: Start" Mar 17 17:31:54.026911 kubelet[2290]: I0317 17:31:54.026883 2290 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:31:54.026911 kubelet[2290]: I0317 17:31:54.026915 2290 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:31:54.034419 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:31:54.052554 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:31:54.055243 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 17 17:31:54.065616 kubelet[2290]: I0317 17:31:54.065577 2290 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:31:54.066048 kubelet[2290]: I0317 17:31:54.065820 2290 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:31:54.066048 kubelet[2290]: I0317 17:31:54.065950 2290 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:31:54.068526 kubelet[2290]: E0317 17:31:54.067647 2290 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 17:31:54.076555 kubelet[2290]: E0317 17:31:54.076495 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="400ms" Mar 17 17:31:54.179102 kubelet[2290]: I0317 17:31:54.179056 2290 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 17:31:54.179490 kubelet[2290]: E0317 17:31:54.179453 2290 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Mar 17 17:31:54.193653 kubelet[2290]: I0317 17:31:54.193599 2290 topology_manager.go:215] "Topology Admit Handler" podUID="74535a6be397f1607fafe6f52c01e895" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 17 17:31:54.194716 kubelet[2290]: I0317 17:31:54.194643 2290 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 17 17:31:54.195560 kubelet[2290]: I0317 17:31:54.195497 2290 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 17 17:31:54.205010 systemd[1]: Created slice kubepods-burstable-pod74535a6be397f1607fafe6f52c01e895.slice - libcontainer container kubepods-burstable-pod74535a6be397f1607fafe6f52c01e895.slice. Mar 17 17:31:54.215307 systemd[1]: Created slice kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice - libcontainer container kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice. Mar 17 17:31:54.219201 systemd[1]: Created slice kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice - libcontainer container kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice. 
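
The three "Topology Admit Handler" entries are the kubelet admitting the control-plane static pods it found under the static pod path registered earlier (/etc/kubernetes/manifests); the per-pod kubepods-burstable-pod<UID>.slice cgroups created here follow directly from that. An abridged sketch of what such a manifest contains; the real files are generated by kubeadm, and the flags and mounts below are typical rather than copied from this host (the hostPath volumes line up with the ca-certs/k8s-certs mounts reconciled just below):

    cat /etc/kubernetes/manifests/kube-apiserver.yaml   # abridged sketch
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.30.11
        command:
        - kube-apiserver
        - --advertise-address=10.0.0.45
        - --secure-port=6443
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate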
Mar 17 17:31:54.277268 kubelet[2290]: I0317 17:31:54.277228 2290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74535a6be397f1607fafe6f52c01e895-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"74535a6be397f1607fafe6f52c01e895\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:31:54.277268 kubelet[2290]: I0317 17:31:54.277266 2290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74535a6be397f1607fafe6f52c01e895-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"74535a6be397f1607fafe6f52c01e895\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:31:54.277421 kubelet[2290]: I0317 17:31:54.277286 2290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:31:54.277421 kubelet[2290]: I0317 17:31:54.277301 2290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:31:54.277421 kubelet[2290]: I0317 17:31:54.277321 2290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost"
Mar 17 17:31:54.277421 kubelet[2290]: I0317 17:31:54.277343 2290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74535a6be397f1607fafe6f52c01e895-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"74535a6be397f1607fafe6f52c01e895\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:31:54.277421 kubelet[2290]: I0317 17:31:54.277362 2290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:31:54.277523 kubelet[2290]: I0317 17:31:54.277377 2290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:31:54.277523 kubelet[2290]: I0317 17:31:54.277420 2290 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:31:54.477616 kubelet[2290]: E0317 17:31:54.477483 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="800ms"
Mar 17 17:31:54.514722 containerd[1483]: time="2025-03-17T17:31:54.514598103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:74535a6be397f1607fafe6f52c01e895,Namespace:kube-system,Attempt:0,}"
Mar 17 17:31:54.518466 containerd[1483]: time="2025-03-17T17:31:54.518428901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}"
Mar 17 17:31:54.522187 containerd[1483]: time="2025-03-17T17:31:54.522100617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}"
Mar 17 17:31:54.580967 kubelet[2290]: I0317 17:31:54.580694 2290 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 17:31:54.581099 kubelet[2290]: E0317 17:31:54.581011 2290 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost"
Mar 17 17:31:54.966163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1593948745.mount: Deactivated successfully.
Mar 17 17:31:54.973983 containerd[1483]: time="2025-03-17T17:31:54.973797270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:31:54.977243 containerd[1483]: time="2025-03-17T17:31:54.977178619Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Mar 17 17:31:54.978215 containerd[1483]: time="2025-03-17T17:31:54.978180123Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:31:54.979276 containerd[1483]: time="2025-03-17T17:31:54.979246810Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:31:54.980129 containerd[1483]: time="2025-03-17T17:31:54.980089871Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:31:54.981100 containerd[1483]: time="2025-03-17T17:31:54.980986891Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:31:54.981754 containerd[1483]: time="2025-03-17T17:31:54.981608362Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:31:54.983162 containerd[1483]: time="2025-03-17T17:31:54.983092505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:31:54.985839 containerd[1483]: time="2025-03-17T17:31:54.985795029Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 471.11337ms"
Mar 17 17:31:54.986427 kubelet[2290]: W0317 17:31:54.986374 2290 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Mar 17 17:31:54.986503 kubelet[2290]: E0317 17:31:54.986438 2290 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Mar 17 17:31:54.987243 containerd[1483]: time="2025-03-17T17:31:54.987165661Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 464.990555ms"
Mar 17 17:31:54.988420 containerd[1483]: time="2025-03-17T17:31:54.988386677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 469.877256ms"
Mar 17 17:31:55.014966 kubelet[2290]: W0317 17:31:55.014898 2290 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Mar 17 17:31:55.015140 kubelet[2290]: E0317 17:31:55.015125 2290 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Mar 17 17:31:55.141828 containerd[1483]: time="2025-03-17T17:31:55.141680772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:31:55.141828 containerd[1483]: time="2025-03-17T17:31:55.141771134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:31:55.141828 containerd[1483]: time="2025-03-17T17:31:55.141787872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:31:55.143133 containerd[1483]: time="2025-03-17T17:31:55.142843532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:31:55.143992 containerd[1483]: time="2025-03-17T17:31:55.143825489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:31:55.143992 containerd[1483]: time="2025-03-17T17:31:55.143870071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:31:55.143992 containerd[1483]: time="2025-03-17T17:31:55.143881296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:31:55.144098 containerd[1483]: time="2025-03-17T17:31:55.143955360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:31:55.148563 containerd[1483]: time="2025-03-17T17:31:55.147384478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:31:55.148563 containerd[1483]: time="2025-03-17T17:31:55.148525188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:31:55.148563 containerd[1483]: time="2025-03-17T17:31:55.148548198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:31:55.148942 containerd[1483]: time="2025-03-17T17:31:55.148861708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:31:55.169937 systemd[1]: Started cri-containerd-759a21059990ad2537bbe8dbdaca6dbc64336cb6bc581e78b8ff3db824b21b08.scope - libcontainer container 759a21059990ad2537bbe8dbdaca6dbc64336cb6bc581e78b8ff3db824b21b08.
Mar 17 17:31:55.174572 systemd[1]: Started cri-containerd-b2b7f0d043d23bdef12e91d97f2a72a04b0eb438dad91ea106df70809997e6d1.scope - libcontainer container b2b7f0d043d23bdef12e91d97f2a72a04b0eb438dad91ea106df70809997e6d1.
Mar 17 17:31:55.176000 systemd[1]: Started cri-containerd-fcc108cf8661b19ecb6116d0b322f76a19421a155ab006183ffa5353130277fd.scope - libcontainer container fcc108cf8661b19ecb6116d0b322f76a19421a155ab006183ffa5353130277fd.
Mar 17 17:31:55.201660 containerd[1483]: time="2025-03-17T17:31:55.201353514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"759a21059990ad2537bbe8dbdaca6dbc64336cb6bc581e78b8ff3db824b21b08\""
Mar 17 17:31:55.205895 containerd[1483]: time="2025-03-17T17:31:55.205790476Z" level=info msg="CreateContainer within sandbox \"759a21059990ad2537bbe8dbdaca6dbc64336cb6bc581e78b8ff3db824b21b08\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 17:31:55.208914 containerd[1483]: time="2025-03-17T17:31:55.208882316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2b7f0d043d23bdef12e91d97f2a72a04b0eb438dad91ea106df70809997e6d1\""
Mar 17 17:31:55.211952 containerd[1483]: time="2025-03-17T17:31:55.211919946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:74535a6be397f1607fafe6f52c01e895,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcc108cf8661b19ecb6116d0b322f76a19421a155ab006183ffa5353130277fd\""
Mar 17 17:31:55.212391 containerd[1483]: time="2025-03-17T17:31:55.212325816Z" level=info msg="CreateContainer within sandbox \"b2b7f0d043d23bdef12e91d97f2a72a04b0eb438dad91ea106df70809997e6d1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 17:31:55.215627 containerd[1483]: time="2025-03-17T17:31:55.215584238Z" level=info msg="CreateContainer within sandbox \"fcc108cf8661b19ecb6116d0b322f76a19421a155ab006183ffa5353130277fd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 17:31:55.219581 kubelet[2290]: W0317 17:31:55.219430 2290 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Mar 17 17:31:55.219581 kubelet[2290]: E0317 17:31:55.219524 2290 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Mar 17 17:31:55.223300 containerd[1483]: time="2025-03-17T17:31:55.223247784Z" level=info msg="CreateContainer within sandbox \"759a21059990ad2537bbe8dbdaca6dbc64336cb6bc581e78b8ff3db824b21b08\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"019e0beb0fc25358319ddfd250df2eb05756ef3bab6d793c77f73e5ae6b1ac82\""
Mar 17 17:31:55.224020 containerd[1483]: time="2025-03-17T17:31:55.223993529Z" level=info msg="StartContainer for \"019e0beb0fc25358319ddfd250df2eb05756ef3bab6d793c77f73e5ae6b1ac82\""
Mar 17 17:31:55.232647 containerd[1483]: time="2025-03-17T17:31:55.232596527Z" level=info msg="CreateContainer within sandbox \"b2b7f0d043d23bdef12e91d97f2a72a04b0eb438dad91ea106df70809997e6d1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7f05aae0c3dd643a78927dfc3545e0c5acaca990f05890c9bf2f546f41b25c01\""
Mar 17 17:31:55.233236 containerd[1483]: time="2025-03-17T17:31:55.233214240Z" level=info msg="StartContainer for \"7f05aae0c3dd643a78927dfc3545e0c5acaca990f05890c9bf2f546f41b25c01\""
Mar 17 17:31:55.234338 containerd[1483]: time="2025-03-17T17:31:55.234300620Z" level=info msg="CreateContainer within sandbox \"fcc108cf8661b19ecb6116d0b322f76a19421a155ab006183ffa5353130277fd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5f98f4df2f64c65691684caea87c249a3c332be3d88ac12b80a67206c799ef56\""
Mar 17 17:31:55.234845 containerd[1483]: time="2025-03-17T17:31:55.234719273Z" level=info msg="StartContainer for \"5f98f4df2f64c65691684caea87c249a3c332be3d88ac12b80a67206c799ef56\""
Mar 17 17:31:55.248540 systemd[1]: Started cri-containerd-019e0beb0fc25358319ddfd250df2eb05756ef3bab6d793c77f73e5ae6b1ac82.scope - libcontainer container 019e0beb0fc25358319ddfd250df2eb05756ef3bab6d793c77f73e5ae6b1ac82.
Mar 17 17:31:55.267935 systemd[1]: Started cri-containerd-5f98f4df2f64c65691684caea87c249a3c332be3d88ac12b80a67206c799ef56.scope - libcontainer container 5f98f4df2f64c65691684caea87c249a3c332be3d88ac12b80a67206c799ef56.
Mar 17 17:31:55.268992 systemd[1]: Started cri-containerd-7f05aae0c3dd643a78927dfc3545e0c5acaca990f05890c9bf2f546f41b25c01.scope - libcontainer container 7f05aae0c3dd643a78927dfc3545e0c5acaca990f05890c9bf2f546f41b25c01.
Mar 17 17:31:55.280516 kubelet[2290]: E0317 17:31:55.280459 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="1.6s"
Mar 17 17:31:55.296811 containerd[1483]: time="2025-03-17T17:31:55.296766432Z" level=info msg="StartContainer for \"019e0beb0fc25358319ddfd250df2eb05756ef3bab6d793c77f73e5ae6b1ac82\" returns successfully"
Mar 17 17:31:55.338940 containerd[1483]: time="2025-03-17T17:31:55.333367763Z" level=info msg="StartContainer for \"5f98f4df2f64c65691684caea87c249a3c332be3d88ac12b80a67206c799ef56\" returns successfully"
Mar 17 17:31:55.338940 containerd[1483]: time="2025-03-17T17:31:55.333593988Z" level=info msg="StartContainer for \"7f05aae0c3dd643a78927dfc3545e0c5acaca990f05890c9bf2f546f41b25c01\" returns successfully"
Mar 17 17:31:55.363409 kubelet[2290]: W0317 17:31:55.358504 2290 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Mar 17 17:31:55.363409 kubelet[2290]: E0317 17:31:55.358573 2290 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused
Mar 17 17:31:55.387554 kubelet[2290]: I0317 17:31:55.387182 2290 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 17:31:55.387554 kubelet[2290]: E0317 17:31:55.387520 2290 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost"
Mar 17 17:31:56.963218 kubelet[2290]: E0317 17:31:56.963162 2290 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 17 17:31:56.988798 kubelet[2290]: I0317 17:31:56.988768 2290 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 17:31:57.071070 kubelet[2290]: E0317 17:31:57.070648 2290 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.182da76c2a0b7571 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:31:53.865950577 +0000 UTC m=+0.531906427,LastTimestamp:2025-03-17 17:31:53.865950577 +0000 UTC m=+0.531906427,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 17 17:31:57.097027 kubelet[2290]: I0317 17:31:57.096988 2290 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Mar 17 17:31:57.122172 kubelet[2290]: E0317 17:31:57.122128 2290 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:31:57.124217 kubelet[2290]: E0317 17:31:57.124099 2290 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.182da76c2aad18ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:31:53.87654369 +0000 UTC m=+0.542499540,LastTimestamp:2025-03-17 17:31:53.87654369 +0000 UTC m=+0.542499540,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 17 17:31:57.222659 kubelet[2290]: E0317 17:31:57.222553 2290 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:31:57.323697 kubelet[2290]: E0317 17:31:57.323654 2290 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:31:57.424416 kubelet[2290]: E0317 17:31:57.424350 2290 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:31:57.525215 kubelet[2290]: E0317 17:31:57.525096 2290 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:31:57.625669 kubelet[2290]: E0317 17:31:57.625620 2290 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:31:57.726367 kubelet[2290]: E0317 17:31:57.726321 2290 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:31:57.827079 kubelet[2290]: E0317 17:31:57.826968 2290 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:31:57.928053 kubelet[2290]: E0317 17:31:57.928012 2290 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:31:58.813426 systemd[1]: Reload requested from client PID 2572 ('systemctl') (unit session-7.scope)...
Mar 17 17:31:58.813442 systemd[1]: Reloading...
Mar 17 17:31:58.865186 kubelet[2290]: I0317 17:31:58.865155 2290 apiserver.go:52] "Watching apiserver"
Mar 17 17:31:58.875352 kubelet[2290]: I0317 17:31:58.875291 2290 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 17:31:58.901759 zram_generator::config[2617]: No configuration found.
Mar 17 17:31:58.989292 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:31:59.074465 systemd[1]: Reloading finished in 260 ms.
Mar 17 17:31:59.097533 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:31:59.118383 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:31:59.118619 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:31:59.118668 systemd[1]: kubelet.service: Consumed 903ms CPU time, 114.4M memory peak.
Mar 17 17:31:59.126972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:31:59.224285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:31:59.227984 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:31:59.281896 kubelet[2658]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:31:59.281896 kubelet[2658]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:31:59.281896 kubelet[2658]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:31:59.282249 kubelet[2658]: I0317 17:31:59.281935 2658 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:31:59.285930 kubelet[2658]: I0317 17:31:59.285901 2658 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 17:31:59.285930 kubelet[2658]: I0317 17:31:59.285924 2658 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:31:59.286178 kubelet[2658]: I0317 17:31:59.286152 2658 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 17:31:59.287665 kubelet[2658]: I0317 17:31:59.287640 2658 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 17:31:59.289078 kubelet[2658]: I0317 17:31:59.289000 2658 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:31:59.296613 kubelet[2658]: I0317 17:31:59.296583 2658 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:31:59.296822 kubelet[2658]: I0317 17:31:59.296785 2658 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:31:59.296978 kubelet[2658]: I0317 17:31:59.296809 2658 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 17:31:59.296978 kubelet[2658]: I0317 17:31:59.296975 2658 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:31:59.297078 kubelet[2658]: I0317 17:31:59.296983 2658 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 17:31:59.297078 kubelet[2658]: I0317 17:31:59.297013 2658 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:31:59.297118 kubelet[2658]: I0317 17:31:59.297096 2658 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 17:31:59.297118 kubelet[2658]: I0317 17:31:59.297109 2658 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:31:59.297165 kubelet[2658]: I0317 17:31:59.297135 2658 kubelet.go:312] "Adding apiserver pod source"
Mar 17 17:31:59.297165 kubelet[2658]: I0317 17:31:59.297151 2658 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:31:59.298923 kubelet[2658]: I0317 17:31:59.298816 2658 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:31:59.299013 kubelet[2658]: I0317 17:31:59.298971 2658 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:31:59.299356 kubelet[2658]: I0317 17:31:59.299332 2658 server.go:1264] "Started kubelet"
Mar 17 17:31:59.299615 kubelet[2658]: I0317 17:31:59.299487 2658 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:31:59.302736 kubelet[2658]: I0317 17:31:59.300317 2658 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 17:31:59.302736 kubelet[2658]: E0317 17:31:59.302172 2658 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:31:59.302736 kubelet[2658]: I0317 17:31:59.302372 2658 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:31:59.302736 kubelet[2658]: I0317 17:31:59.302432 2658 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:31:59.302736 kubelet[2658]: I0317 17:31:59.302595 2658 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:31:59.303149 kubelet[2658]: I0317 17:31:59.303130 2658 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 17:31:59.303257 kubelet[2658]: I0317 17:31:59.303237 2658 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:31:59.303386 kubelet[2658]: I0317 17:31:59.303371 2658 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:31:59.312829 kubelet[2658]: I0317 17:31:59.312590 2658 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:31:59.312829 kubelet[2658]: I0317 17:31:59.312613 2658 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:31:59.312829 kubelet[2658]: I0317 17:31:59.312672 2658 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:31:59.330650 kubelet[2658]: I0317 17:31:59.330607 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:31:59.332186 kubelet[2658]: I0317 17:31:59.331573 2658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:31:59.332186 kubelet[2658]: I0317 17:31:59.331611 2658 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 17:31:59.332186 kubelet[2658]: I0317 17:31:59.331632 2658 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 17:31:59.332186 kubelet[2658]: E0317 17:31:59.331670 2658 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:31:59.358283 kubelet[2658]: I0317 17:31:59.358236 2658 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 17:31:59.358424 kubelet[2658]: I0317 17:31:59.358411 2658 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 17:31:59.358491 kubelet[2658]: I0317 17:31:59.358482 2658 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:31:59.358679 kubelet[2658]: I0317 17:31:59.358663 2658 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 17 17:31:59.358779 kubelet[2658]: I0317 17:31:59.358756 2658 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 17 17:31:59.358837 kubelet[2658]: I0317 17:31:59.358829 2658 policy_none.go:49] "None policy: Start"
Mar 17 17:31:59.359523 kubelet[2658]: I0317 17:31:59.359501 2658 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 17:31:59.359587 kubelet[2658]: I0317 17:31:59.359528 2658 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:31:59.359674 kubelet[2658]: I0317 17:31:59.359656 2658 state_mem.go:75] "Updated machine memory state"
Mar 17 17:31:59.363621 kubelet[2658]: I0317 17:31:59.363597 2658 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:31:59.363872 kubelet[2658]: I0317 17:31:59.363830 2658 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:31:59.363951 kubelet[2658]: I0317 17:31:59.363931 2658 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:31:59.405252 kubelet[2658]: I0317 17:31:59.405218 2658 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 17:31:59.412011 kubelet[2658]: I0317 17:31:59.411985 2658 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Mar 17 17:31:59.412180 kubelet[2658]: I0317 17:31:59.412064 2658 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Mar 17 17:31:59.432500 kubelet[2658]: I0317 17:31:59.432411 2658 topology_manager.go:215] "Topology Admit Handler" podUID="74535a6be397f1607fafe6f52c01e895" podNamespace="kube-system" podName="kube-apiserver-localhost"
Mar 17 17:31:59.432626 kubelet[2658]: I0317 17:31:59.432575 2658 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Mar 17 17:31:59.432626 kubelet[2658]: I0317 17:31:59.432614 2658 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost"
Mar 17 17:31:59.504508 kubelet[2658]: I0317 17:31:59.504461 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74535a6be397f1607fafe6f52c01e895-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"74535a6be397f1607fafe6f52c01e895\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:31:59.504508 kubelet[2658]: I0317 17:31:59.504501 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:31:59.504668 kubelet[2658]: I0317 17:31:59.504524 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:31:59.504668 kubelet[2658]: I0317 17:31:59.504541 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:31:59.504668 kubelet[2658]: I0317 17:31:59.504562 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost"
Mar 17 17:31:59.504668 kubelet[2658]: I0317 17:31:59.504577 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74535a6be397f1607fafe6f52c01e895-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"74535a6be397f1607fafe6f52c01e895\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:31:59.504668 kubelet[2658]: I0317 17:31:59.504595 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:31:59.504797 kubelet[2658]: I0317 17:31:59.504610 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:31:59.504797 kubelet[2658]: I0317 17:31:59.504625 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74535a6be397f1607fafe6f52c01e895-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"74535a6be397f1607fafe6f52c01e895\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:31:59.815923 sudo[2693]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 17 17:31:59.816211 sudo[2693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 17 17:32:00.257233 sudo[2693]: pam_unix(sudo:session): session closed for user root
Mar 17 17:32:00.297827 kubelet[2658]: I0317 17:32:00.297786 2658 apiserver.go:52] "Watching apiserver"
Mar 17 17:32:00.303949 kubelet[2658]: I0317 17:32:00.303913 2658 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 17:32:00.350154 kubelet[2658]: E0317 17:32:00.350118 2658 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 17 17:32:00.370083 kubelet[2658]: I0317 17:32:00.370024 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.370006471 podStartE2EDuration="1.370006471s" podCreationTimestamp="2025-03-17 17:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:32:00.3630359 +0000 UTC m=+1.130633502" watchObservedRunningTime="2025-03-17 17:32:00.370006471 +0000 UTC m=+1.137604113"
Mar 17 17:32:00.370971 kubelet[2658]: I0317 17:32:00.370127 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.370121554 podStartE2EDuration="1.370121554s" podCreationTimestamp="2025-03-17 17:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:32:00.37011216 +0000 UTC m=+1.137709842" watchObservedRunningTime="2025-03-17 17:32:00.370121554 +0000 UTC m=+1.137719196"
Mar 17 17:32:01.870856 sudo[1673]: pam_unix(sudo:session): session closed for user root
Mar 17 17:32:01.871927 sshd[1672]: Connection closed by 10.0.0.1 port 48508
Mar 17 17:32:01.872373 sshd-session[1669]: pam_unix(sshd:session): session closed for user core
Mar 17 17:32:01.874932 systemd[1]: sshd@6-10.0.0.45:22-10.0.0.1:48508.service: Deactivated successfully.
Mar 17 17:32:01.876773 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 17:32:01.877882 systemd[1]: session-7.scope: Consumed 7.507s CPU time, 287.4M memory peak.
Mar 17 17:32:01.879517 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit.
Mar 17 17:32:01.880286 systemd-logind[1468]: Removed session 7.
Mar 17 17:32:07.264290 kubelet[2658]: I0317 17:32:07.264126 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=8.264110958 podStartE2EDuration="8.264110958s" podCreationTimestamp="2025-03-17 17:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:32:00.390257266 +0000 UTC m=+1.157854908" watchObservedRunningTime="2025-03-17 17:32:07.264110958 +0000 UTC m=+8.031708600"
Mar 17 17:32:14.191043 kubelet[2658]: I0317 17:32:14.190992 2658 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 17:32:14.196751 containerd[1483]: time="2025-03-17T17:32:14.196676544Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 17:32:14.198235 kubelet[2658]: I0317 17:32:14.197247 2658 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 17 17:32:15.184509 kubelet[2658]: I0317 17:32:15.183831 2658 topology_manager.go:215] "Topology Admit Handler" podUID="6c974b0b-c2b0-48ff-a11e-70fce7c28277" podNamespace="kube-system" podName="kube-proxy-dpxhx"
Mar 17 17:32:15.192746 kubelet[2658]: I0317 17:32:15.192679 2658 topology_manager.go:215] "Topology Admit Handler" podUID="c33e36dc-9fca-4745-89ed-1c5048f53be4" podNamespace="kube-system" podName="cilium-vdlmk"
Mar 17 17:32:15.205235 systemd[1]: Created slice kubepods-besteffort-pod6c974b0b_c2b0_48ff_a11e_70fce7c28277.slice - libcontainer container kubepods-besteffort-pod6c974b0b_c2b0_48ff_a11e_70fce7c28277.slice.
Mar 17 17:32:15.214848 kubelet[2658]: I0317 17:32:15.214185 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-bpf-maps\") pod \"cilium-vdlmk\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " pod="kube-system/cilium-vdlmk"
Mar 17 17:32:15.214848 kubelet[2658]: I0317 17:32:15.214228 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b28dt\" (UniqueName: \"kubernetes.io/projected/c33e36dc-9fca-4745-89ed-1c5048f53be4-kube-api-access-b28dt\") pod \"cilium-vdlmk\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " pod="kube-system/cilium-vdlmk"
Mar 17 17:32:15.214848 kubelet[2658]: I0317 17:32:15.214249 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6c974b0b-c2b0-48ff-a11e-70fce7c28277-kube-proxy\") pod \"kube-proxy-dpxhx\" (UID: \"6c974b0b-c2b0-48ff-a11e-70fce7c28277\") " pod="kube-system/kube-proxy-dpxhx"
Mar 17 17:32:15.214848 kubelet[2658]: I0317 17:32:15.214280 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-xtables-lock\") pod \"cilium-vdlmk\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " pod="kube-system/cilium-vdlmk"
Mar 17 17:32:15.214848 kubelet[2658]: I0317 17:32:15.214294 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c33e36dc-9fca-4745-89ed-1c5048f53be4-cilium-config-path\") pod \"cilium-vdlmk\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " pod="kube-system/cilium-vdlmk"
Mar 17 17:32:15.214848 kubelet[2658]: I0317 17:32:15.214309 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c33e36dc-9fca-4745-89ed-1c5048f53be4-hubble-tls\") pod \"cilium-vdlmk\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " pod="kube-system/cilium-vdlmk"
Mar 17 17:32:15.215090 kubelet[2658]: I0317 17:32:15.214323 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c974b0b-c2b0-48ff-a11e-70fce7c28277-lib-modules\") pod \"kube-proxy-dpxhx\" (UID: \"6c974b0b-c2b0-48ff-a11e-70fce7c28277\") " pod="kube-system/kube-proxy-dpxhx"
Mar 17 17:32:15.215090 kubelet[2658]: I0317 17:32:15.214336 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-cilium-run\") pod \"cilium-vdlmk\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " pod="kube-system/cilium-vdlmk"
Mar 17 17:32:15.215090 kubelet[2658]: I0317 17:32:15.214349 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-hostproc\") pod \"cilium-vdlmk\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " pod="kube-system/cilium-vdlmk"
Mar 17 17:32:15.215090 kubelet[2658]: I0317 17:32:15.214362 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-etc-cni-netd\") pod \"cilium-vdlmk\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " pod="kube-system/cilium-vdlmk"
Mar 17 17:32:15.215090 kubelet[2658]: I0317 17:32:15.214376 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l67bm\" (UniqueName: \"kubernetes.io/projected/6c974b0b-c2b0-48ff-a11e-70fce7c28277-kube-api-access-l67bm\") pod \"kube-proxy-dpxhx\" (UID: \"6c974b0b-c2b0-48ff-a11e-70fce7c28277\") " pod="kube-system/kube-proxy-dpxhx"
Mar 17 17:32:15.215090 kubelet[2658]: I0317 17:32:15.214392 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-cilium-cgroup\") pod \"cilium-vdlmk\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " pod="kube-system/cilium-vdlmk"
Mar 17 17:32:15.215224 kubelet[2658]: I0317 17:32:15.214409 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c33e36dc-9fca-4745-89ed-1c5048f53be4-clustermesh-secrets\") pod \"cilium-vdlmk\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " pod="kube-system/cilium-vdlmk"
Mar 17 17:32:15.215224 kubelet[2658]: I0317 17:32:15.214424 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-lib-modules\") pod \"cilium-vdlmk\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " pod="kube-system/cilium-vdlmk"
Mar 17 17:32:15.215224 kubelet[2658]: I0317 17:32:15.214438 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c974b0b-c2b0-48ff-a11e-70fce7c28277-xtables-lock\") pod \"kube-proxy-dpxhx\" (UID: \"6c974b0b-c2b0-48ff-a11e-70fce7c28277\") " pod="kube-system/kube-proxy-dpxhx"
Mar 17 17:32:15.215224 kubelet[2658]: I0317 17:32:15.214451 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-cni-path\") pod \"cilium-vdlmk\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " pod="kube-system/cilium-vdlmk"
Mar 17 17:32:15.215224 kubelet[2658]: I0317 17:32:15.214472 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-host-proc-sys-net\") pod \"cilium-vdlmk\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " pod="kube-system/cilium-vdlmk"
Mar 17 17:32:15.215224 kubelet[2658]: I0317 17:32:15.214488 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-host-proc-sys-kernel\") pod \"cilium-vdlmk\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " pod="kube-system/cilium-vdlmk"
Mar 17 17:32:15.221135 systemd[1]: Created slice kubepods-burstable-podc33e36dc_9fca_4745_89ed_1c5048f53be4.slice - libcontainer container kubepods-burstable-podc33e36dc_9fca_4745_89ed_1c5048f53be4.slice.
Mar 17 17:32:15.287761 kubelet[2658]: I0317 17:32:15.286845 2658 topology_manager.go:215] "Topology Admit Handler" podUID="b2d86a1a-9477-4724-8766-30af5d545a54" podNamespace="kube-system" podName="cilium-operator-599987898-9b6dc"
Mar 17 17:32:15.297691 systemd[1]: Created slice kubepods-besteffort-podb2d86a1a_9477_4724_8766_30af5d545a54.slice - libcontainer container kubepods-besteffort-podb2d86a1a_9477_4724_8766_30af5d545a54.slice.
Mar 17 17:32:15.315706 kubelet[2658]: I0317 17:32:15.315498 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2d86a1a-9477-4724-8766-30af5d545a54-cilium-config-path\") pod \"cilium-operator-599987898-9b6dc\" (UID: \"b2d86a1a-9477-4724-8766-30af5d545a54\") " pod="kube-system/cilium-operator-599987898-9b6dc"
Mar 17 17:32:15.315860 kubelet[2658]: I0317 17:32:15.315842 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tkbj\" (UniqueName: \"kubernetes.io/projected/b2d86a1a-9477-4724-8766-30af5d545a54-kube-api-access-9tkbj\") pod \"cilium-operator-599987898-9b6dc\" (UID: \"b2d86a1a-9477-4724-8766-30af5d545a54\") " pod="kube-system/cilium-operator-599987898-9b6dc"
Mar 17 17:32:15.523886 containerd[1483]: time="2025-03-17T17:32:15.523777945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dpxhx,Uid:6c974b0b-c2b0-48ff-a11e-70fce7c28277,Namespace:kube-system,Attempt:0,}"
Mar 17 17:32:15.536897 containerd[1483]: time="2025-03-17T17:32:15.536428675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vdlmk,Uid:c33e36dc-9fca-4745-89ed-1c5048f53be4,Namespace:kube-system,Attempt:0,}"
Mar 17 17:32:15.574762 containerd[1483]: time="2025-03-17T17:32:15.574634101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:32:15.574762 containerd[1483]: time="2025-03-17T17:32:15.574701981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:32:15.574921 containerd[1483]: time="2025-03-17T17:32:15.574716821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:32:15.574921 containerd[1483]: time="2025-03-17T17:32:15.574841699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:32:15.582883 containerd[1483]: time="2025-03-17T17:32:15.582774350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:32:15.582883 containerd[1483]: time="2025-03-17T17:32:15.582847270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:32:15.582883 containerd[1483]: time="2025-03-17T17:32:15.582858069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:32:15.583293 containerd[1483]: time="2025-03-17T17:32:15.583207106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:32:15.599931 systemd[1]: Started cri-containerd-6296ae78fc1e7319705547fd5fd3b357517566f17a1e572194d0e3c0e175ffcb.scope - libcontainer container 6296ae78fc1e7319705547fd5fd3b357517566f17a1e572194d0e3c0e175ffcb.
Mar 17 17:32:15.600874 containerd[1483]: time="2025-03-17T17:32:15.600839152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9b6dc,Uid:b2d86a1a-9477-4724-8766-30af5d545a54,Namespace:kube-system,Attempt:0,}"
Mar 17 17:32:15.603034 systemd[1]: Started cri-containerd-ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6.scope - libcontainer container ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6.
Mar 17 17:32:15.630277 containerd[1483]: time="2025-03-17T17:32:15.630178976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:32:15.630494 containerd[1483]: time="2025-03-17T17:32:15.630246376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:32:15.630494 containerd[1483]: time="2025-03-17T17:32:15.630268336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:32:15.630494 containerd[1483]: time="2025-03-17T17:32:15.630351415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:32:15.633890 containerd[1483]: time="2025-03-17T17:32:15.633550707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vdlmk,Uid:c33e36dc-9fca-4745-89ed-1c5048f53be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6\""
Mar 17 17:32:15.639586 containerd[1483]: time="2025-03-17T17:32:15.639551855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dpxhx,Uid:6c974b0b-c2b0-48ff-a11e-70fce7c28277,Namespace:kube-system,Attempt:0,} returns sandbox id \"6296ae78fc1e7319705547fd5fd3b357517566f17a1e572194d0e3c0e175ffcb\""
Mar 17 17:32:15.655600 systemd[1]: Started cri-containerd-a7459a070d82efac7ef1d2ce611ae4b57bdf9725907ce3d0a83fd202bed60933.scope - libcontainer container a7459a070d82efac7ef1d2ce611ae4b57bdf9725907ce3d0a83fd202bed60933.
Mar 17 17:32:15.656047 containerd[1483]: time="2025-03-17T17:32:15.655722873Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 17 17:32:15.656047 containerd[1483]: time="2025-03-17T17:32:15.656015111Z" level=info msg="CreateContainer within sandbox \"6296ae78fc1e7319705547fd5fd3b357517566f17a1e572194d0e3c0e175ffcb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 17:32:15.689760 containerd[1483]: time="2025-03-17T17:32:15.689596138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9b6dc,Uid:b2d86a1a-9477-4724-8766-30af5d545a54,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7459a070d82efac7ef1d2ce611ae4b57bdf9725907ce3d0a83fd202bed60933\""
Mar 17 17:32:15.693457 containerd[1483]: time="2025-03-17T17:32:15.693351025Z" level=info msg="CreateContainer within sandbox \"6296ae78fc1e7319705547fd5fd3b357517566f17a1e572194d0e3c0e175ffcb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d52d9e6fd32f2f2ca866fd1ad8eaea1414ec4e0cb26d54c6604da36938496e40\""
Mar 17 17:32:15.696952 containerd[1483]: time="2025-03-17T17:32:15.696830954Z" level=info msg="StartContainer for \"d52d9e6fd32f2f2ca866fd1ad8eaea1414ec4e0cb26d54c6604da36938496e40\""
Mar 17 17:32:15.728925 systemd[1]: Started cri-containerd-d52d9e6fd32f2f2ca866fd1ad8eaea1414ec4e0cb26d54c6604da36938496e40.scope - libcontainer container d52d9e6fd32f2f2ca866fd1ad8eaea1414ec4e0cb26d54c6604da36938496e40.
Mar 17 17:32:15.753203 containerd[1483]: time="2025-03-17T17:32:15.753141943Z" level=info msg="StartContainer for \"d52d9e6fd32f2f2ca866fd1ad8eaea1414ec4e0cb26d54c6604da36938496e40\" returns successfully"
Mar 17 17:32:17.122786 update_engine[1470]: I20250317 17:32:17.122344 1470 update_attempter.cc:509] Updating boot flags...
Mar 17 17:32:17.140015 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3027)
Mar 17 17:32:25.927290 systemd[1]: Started sshd@7-10.0.0.45:22-10.0.0.1:57520.service - OpenSSH per-connection server daemon (10.0.0.1:57520).
Mar 17 17:32:25.972911 sshd[3035]: Accepted publickey for core from 10.0.0.1 port 57520 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg
Mar 17 17:32:25.974236 sshd-session[3035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:32:25.978379 systemd-logind[1468]: New session 8 of user core.
Mar 17 17:32:25.983897 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 17 17:32:26.110285 sshd[3037]: Connection closed by 10.0.0.1 port 57520
Mar 17 17:32:26.110629 sshd-session[3035]: pam_unix(sshd:session): session closed for user core
Mar 17 17:32:26.114052 systemd[1]: sshd@7-10.0.0.45:22-10.0.0.1:57520.service: Deactivated successfully.
Mar 17 17:32:26.118240 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit.
Mar 17 17:32:26.118332 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 17:32:26.119185 systemd-logind[1468]: Removed session 8.
Mar 17 17:32:31.126474 systemd[1]: Started sshd@8-10.0.0.45:22-10.0.0.1:57534.service - OpenSSH per-connection server daemon (10.0.0.1:57534).
Mar 17 17:32:31.207280 sshd[3051]: Accepted publickey for core from 10.0.0.1 port 57534 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg
Mar 17 17:32:31.208630 sshd-session[3051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:32:31.212912 systemd-logind[1468]: New session 9 of user core.
Mar 17 17:32:31.223975 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 17 17:32:31.350456 sshd[3053]: Connection closed by 10.0.0.1 port 57534
Mar 17 17:32:31.351247 sshd-session[3051]: pam_unix(sshd:session): session closed for user core
Mar 17 17:32:31.354073 systemd[1]: sshd@8-10.0.0.45:22-10.0.0.1:57534.service: Deactivated successfully.
Mar 17 17:32:31.355868 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 17:32:31.357306 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit.
Mar 17 17:32:31.358245 systemd-logind[1468]: Removed session 9.
Mar 17 17:32:36.364242 systemd[1]: Started sshd@9-10.0.0.45:22-10.0.0.1:34848.service - OpenSSH per-connection server daemon (10.0.0.1:34848).
Mar 17 17:32:36.429726 sshd[3067]: Accepted publickey for core from 10.0.0.1 port 34848 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg
Mar 17 17:32:36.431784 sshd-session[3067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:32:36.435822 systemd-logind[1468]: New session 10 of user core.
Mar 17 17:32:36.444954 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 17 17:32:36.573777 sshd[3069]: Connection closed by 10.0.0.1 port 34848
Mar 17 17:32:36.574479 sshd-session[3067]: pam_unix(sshd:session): session closed for user core
Mar 17 17:32:36.577978 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit.
Mar 17 17:32:36.578240 systemd[1]: sshd@9-10.0.0.45:22-10.0.0.1:34848.service: Deactivated successfully.
Mar 17 17:32:36.580122 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 17:32:36.581153 systemd-logind[1468]: Removed session 10.
Mar 17 17:32:41.598182 systemd[1]: Started sshd@10-10.0.0.45:22-10.0.0.1:34862.service - OpenSSH per-connection server daemon (10.0.0.1:34862).
Mar 17 17:32:41.641404 sshd[3084]: Accepted publickey for core from 10.0.0.1 port 34862 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg
Mar 17 17:32:41.643488 sshd-session[3084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:32:41.649196 systemd-logind[1468]: New session 11 of user core.
Mar 17 17:32:41.664984 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 17 17:32:41.812607 sshd[3086]: Connection closed by 10.0.0.1 port 34862
Mar 17 17:32:41.811680 sshd-session[3084]: pam_unix(sshd:session): session closed for user core
Mar 17 17:32:41.814329 systemd[1]: sshd@10-10.0.0.45:22-10.0.0.1:34862.service: Deactivated successfully.
Mar 17 17:32:41.816353 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 17:32:41.818390 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit.
Mar 17 17:32:41.819317 systemd-logind[1468]: Removed session 11.
Mar 17 17:32:44.027949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121849046.mount: Deactivated successfully.
Mar 17 17:32:46.829527 systemd[1]: Started sshd@11-10.0.0.45:22-10.0.0.1:47362.service - OpenSSH per-connection server daemon (10.0.0.1:47362).
Mar 17 17:32:46.889915 sshd[3126]: Accepted publickey for core from 10.0.0.1 port 47362 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:32:46.891636 sshd-session[3126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:32:46.899052 systemd-logind[1468]: New session 12 of user core. Mar 17 17:32:46.910008 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:32:47.038485 sshd[3128]: Connection closed by 10.0.0.1 port 47362 Mar 17 17:32:47.038837 sshd-session[3126]: pam_unix(sshd:session): session closed for user core Mar 17 17:32:47.042329 systemd[1]: sshd@11-10.0.0.45:22-10.0.0.1:47362.service: Deactivated successfully. Mar 17 17:32:47.044291 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:32:47.044992 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:32:47.046056 systemd-logind[1468]: Removed session 12. Mar 17 17:32:51.849409 containerd[1483]: time="2025-03-17T17:32:51.849351548Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:32:51.850758 containerd[1483]: time="2025-03-17T17:32:51.850714345Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 17 17:32:51.851688 containerd[1483]: time="2025-03-17T17:32:51.851661743Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:32:51.853645 containerd[1483]: time="2025-03-17T17:32:51.853469458Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 36.197454267s" Mar 17 17:32:51.853645 containerd[1483]: time="2025-03-17T17:32:51.853503178Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 17:32:51.856132 containerd[1483]: time="2025-03-17T17:32:51.856105732Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 17:32:51.857943 containerd[1483]: time="2025-03-17T17:32:51.857899927Z" level=info msg="CreateContainer within sandbox \"ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:32:51.891032 containerd[1483]: time="2025-03-17T17:32:51.890977206Z" level=info msg="CreateContainer within sandbox \"ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7\"" Mar 17 17:32:51.892325 containerd[1483]: time="2025-03-17T17:32:51.892286523Z" level=info msg="StartContainer for \"ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7\"" Mar 17 17:32:51.922915 systemd[1]: 
Started cri-containerd-ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7.scope - libcontainer container ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7. Mar 17 17:32:51.946633 containerd[1483]: time="2025-03-17T17:32:51.946589950Z" level=info msg="StartContainer for \"ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7\" returns successfully" Mar 17 17:32:51.997589 systemd[1]: cri-containerd-ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7.scope: Deactivated successfully. Mar 17 17:32:52.068280 systemd[1]: Started sshd@12-10.0.0.45:22-10.0.0.1:47372.service - OpenSSH per-connection server daemon (10.0.0.1:47372). Mar 17 17:32:52.109318 containerd[1483]: time="2025-03-17T17:32:52.103546010Z" level=info msg="shim disconnected" id=ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7 namespace=k8s.io Mar 17 17:32:52.109318 containerd[1483]: time="2025-03-17T17:32:52.109240516Z" level=warning msg="cleaning up after shim disconnected" id=ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7 namespace=k8s.io Mar 17 17:32:52.109318 containerd[1483]: time="2025-03-17T17:32:52.109254116Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:32:52.158397 sshd[3199]: Accepted publickey for core from 10.0.0.1 port 47372 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:32:52.159887 sshd-session[3199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:32:52.163802 systemd-logind[1468]: New session 13 of user core. Mar 17 17:32:52.173019 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:32:52.296713 sshd[3214]: Connection closed by 10.0.0.1 port 47372 Mar 17 17:32:52.297110 sshd-session[3199]: pam_unix(sshd:session): session closed for user core Mar 17 17:32:52.299852 systemd[1]: sshd@12-10.0.0.45:22-10.0.0.1:47372.service: Deactivated successfully. Mar 17 17:32:52.303243 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:32:52.305275 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:32:52.306384 systemd-logind[1468]: Removed session 13. 
Mar 17 17:32:52.463163 containerd[1483]: time="2025-03-17T17:32:52.462898864Z" level=info msg="CreateContainer within sandbox \"ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:32:52.482546 kubelet[2658]: I0317 17:32:52.481988 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dpxhx" podStartSLOduration=37.481972618 podStartE2EDuration="37.481972618s" podCreationTimestamp="2025-03-17 17:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:32:16.41322716 +0000 UTC m=+17.180824802" watchObservedRunningTime="2025-03-17 17:32:52.481972618 +0000 UTC m=+53.249570260" Mar 17 17:32:52.484642 containerd[1483]: time="2025-03-17T17:32:52.484514452Z" level=info msg="CreateContainer within sandbox \"ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80\"" Mar 17 17:32:52.485035 containerd[1483]: time="2025-03-17T17:32:52.485006371Z" level=info msg="StartContainer for \"91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80\"" Mar 17 17:32:52.517942 systemd[1]: Started cri-containerd-91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80.scope - libcontainer container 91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80. Mar 17 17:32:52.553410 containerd[1483]: time="2025-03-17T17:32:52.553310766Z" level=info msg="StartContainer for \"91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80\" returns successfully" Mar 17 17:32:52.563030 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:32:52.563509 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:32:52.563815 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:32:52.574144 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:32:52.574672 systemd[1]: cri-containerd-91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80.scope: Deactivated successfully. Mar 17 17:32:52.586885 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:32:52.594493 containerd[1483]: time="2025-03-17T17:32:52.594429227Z" level=info msg="shim disconnected" id=91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80 namespace=k8s.io Mar 17 17:32:52.594493 containerd[1483]: time="2025-03-17T17:32:52.594483427Z" level=warning msg="cleaning up after shim disconnected" id=91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80 namespace=k8s.io Mar 17 17:32:52.594493 containerd[1483]: time="2025-03-17T17:32:52.594492547Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:32:52.884204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7-rootfs.mount: Deactivated successfully. Mar 17 17:32:53.042022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3034486118.mount: Deactivated successfully. 
Mar 17 17:32:53.331859 containerd[1483]: time="2025-03-17T17:32:53.331813064Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:32:53.334631 containerd[1483]: time="2025-03-17T17:32:53.334579498Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 17 17:32:53.335832 containerd[1483]: time="2025-03-17T17:32:53.335799415Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:32:53.337393 containerd[1483]: time="2025-03-17T17:32:53.337355931Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.481217679s" Mar 17 17:32:53.337431 containerd[1483]: time="2025-03-17T17:32:53.337395171Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 17:32:53.340468 containerd[1483]: time="2025-03-17T17:32:53.340414284Z" level=info msg="CreateContainer within sandbox \"a7459a070d82efac7ef1d2ce611ae4b57bdf9725907ce3d0a83fd202bed60933\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:32:53.368191 containerd[1483]: time="2025-03-17T17:32:53.368139938Z" level=info msg="CreateContainer within sandbox \"a7459a070d82efac7ef1d2ce611ae4b57bdf9725907ce3d0a83fd202bed60933\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348\"" Mar 17 17:32:53.368804 containerd[1483]: time="2025-03-17T17:32:53.368570337Z" level=info msg="StartContainer for \"2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348\"" Mar 17 17:32:53.392911 systemd[1]: Started cri-containerd-2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348.scope - libcontainer container 2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348. 
Mar 17 17:32:53.419603 containerd[1483]: time="2025-03-17T17:32:53.419459857Z" level=info msg="StartContainer for \"2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348\" returns successfully" Mar 17 17:32:53.472047 containerd[1483]: time="2025-03-17T17:32:53.472005172Z" level=info msg="CreateContainer within sandbox \"ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:32:53.496939 containerd[1483]: time="2025-03-17T17:32:53.496886233Z" level=info msg="CreateContainer within sandbox \"ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741\"" Mar 17 17:32:53.500942 containerd[1483]: time="2025-03-17T17:32:53.500892104Z" level=info msg="StartContainer for \"707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741\"" Mar 17 17:32:53.532970 systemd[1]: Started cri-containerd-707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741.scope - libcontainer container 707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741. Mar 17 17:32:53.589088 containerd[1483]: time="2025-03-17T17:32:53.588462856Z" level=info msg="StartContainer for \"707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741\" returns successfully" Mar 17 17:32:53.625887 systemd[1]: cri-containerd-707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741.scope: Deactivated successfully. Mar 17 17:32:53.770618 containerd[1483]: time="2025-03-17T17:32:53.770549585Z" level=info msg="shim disconnected" id=707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741 namespace=k8s.io Mar 17 17:32:53.770618 containerd[1483]: time="2025-03-17T17:32:53.770600985Z" level=warning msg="cleaning up after shim disconnected" id=707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741 namespace=k8s.io Mar 17 17:32:53.770618 containerd[1483]: time="2025-03-17T17:32:53.770608865Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:32:54.478324 containerd[1483]: time="2025-03-17T17:32:54.478282047Z" level=info msg="CreateContainer within sandbox \"ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:32:54.502886 kubelet[2658]: I0317 17:32:54.501665 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-9b6dc" podStartSLOduration=1.854992104 podStartE2EDuration="39.501648352s" podCreationTimestamp="2025-03-17 17:32:15 +0000 UTC" firstStartedPulling="2025-03-17 17:32:15.691550721 +0000 UTC m=+16.459148363" lastFinishedPulling="2025-03-17 17:32:53.338206969 +0000 UTC m=+54.105804611" observedRunningTime="2025-03-17 17:32:53.508747405 +0000 UTC m=+54.276345087" watchObservedRunningTime="2025-03-17 17:32:54.501648352 +0000 UTC m=+55.269245994" Mar 17 17:32:54.505842 containerd[1483]: time="2025-03-17T17:32:54.505791503Z" level=info msg="CreateContainer within sandbox \"ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3\"" Mar 17 17:32:54.507481 containerd[1483]: time="2025-03-17T17:32:54.506585661Z" level=info msg="StartContainer for \"270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3\"" Mar 17 17:32:54.531950 
systemd[1]: Started cri-containerd-270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3.scope - libcontainer container 270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3. Mar 17 17:32:54.556098 systemd[1]: cri-containerd-270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3.scope: Deactivated successfully. Mar 17 17:32:54.565918 containerd[1483]: time="2025-03-17T17:32:54.565821203Z" level=info msg="StartContainer for \"270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3\" returns successfully" Mar 17 17:32:54.591178 containerd[1483]: time="2025-03-17T17:32:54.590997944Z" level=info msg="shim disconnected" id=270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3 namespace=k8s.io Mar 17 17:32:54.591178 containerd[1483]: time="2025-03-17T17:32:54.591076704Z" level=warning msg="cleaning up after shim disconnected" id=270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3 namespace=k8s.io Mar 17 17:32:54.591178 containerd[1483]: time="2025-03-17T17:32:54.591085104Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:32:54.884212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3-rootfs.mount: Deactivated successfully. Mar 17 17:32:55.483472 containerd[1483]: time="2025-03-17T17:32:55.483429320Z" level=info msg="CreateContainer within sandbox \"ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:32:55.508608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount847337397.mount: Deactivated successfully. Mar 17 17:32:55.509897 containerd[1483]: time="2025-03-17T17:32:55.509860580Z" level=info msg="CreateContainer within sandbox \"ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946\"" Mar 17 17:32:55.511780 containerd[1483]: time="2025-03-17T17:32:55.510326139Z" level=info msg="StartContainer for \"894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946\"" Mar 17 17:32:55.544935 systemd[1]: Started cri-containerd-894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946.scope - libcontainer container 894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946. Mar 17 17:32:55.589286 containerd[1483]: time="2025-03-17T17:32:55.589239957Z" level=info msg="StartContainer for \"894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946\" returns successfully" Mar 17 17:32:55.730164 kubelet[2658]: I0317 17:32:55.729593 2658 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:32:55.753875 kubelet[2658]: I0317 17:32:55.753751 2658 topology_manager.go:215] "Topology Admit Handler" podUID="84634600-f499-410b-a2d3-22835e7bced7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c4nxw" Mar 17 17:32:55.753981 kubelet[2658]: I0317 17:32:55.753918 2658 topology_manager.go:215] "Topology Admit Handler" podUID="baa91bd7-5796-4a81-86dd-0b590ce48951" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nwg6r" Mar 17 17:32:55.763670 systemd[1]: Created slice kubepods-burstable-podbaa91bd7_5796_4a81_86dd_0b590ce48951.slice - libcontainer container kubepods-burstable-podbaa91bd7_5796_4a81_86dd_0b590ce48951.slice. 
Mar 17 17:32:55.768429 systemd[1]: Created slice kubepods-burstable-pod84634600_f499_410b_a2d3_22835e7bced7.slice - libcontainer container kubepods-burstable-pod84634600_f499_410b_a2d3_22835e7bced7.slice. Mar 17 17:32:55.896140 kubelet[2658]: I0317 17:32:55.896045 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hvhj\" (UniqueName: \"kubernetes.io/projected/baa91bd7-5796-4a81-86dd-0b590ce48951-kube-api-access-5hvhj\") pod \"coredns-7db6d8ff4d-nwg6r\" (UID: \"baa91bd7-5796-4a81-86dd-0b590ce48951\") " pod="kube-system/coredns-7db6d8ff4d-nwg6r" Mar 17 17:32:55.896266 kubelet[2658]: I0317 17:32:55.896152 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/baa91bd7-5796-4a81-86dd-0b590ce48951-config-volume\") pod \"coredns-7db6d8ff4d-nwg6r\" (UID: \"baa91bd7-5796-4a81-86dd-0b590ce48951\") " pod="kube-system/coredns-7db6d8ff4d-nwg6r" Mar 17 17:32:55.896266 kubelet[2658]: I0317 17:32:55.896214 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84634600-f499-410b-a2d3-22835e7bced7-config-volume\") pod \"coredns-7db6d8ff4d-c4nxw\" (UID: \"84634600-f499-410b-a2d3-22835e7bced7\") " pod="kube-system/coredns-7db6d8ff4d-c4nxw" Mar 17 17:32:55.896266 kubelet[2658]: I0317 17:32:55.896237 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvml4\" (UniqueName: \"kubernetes.io/projected/84634600-f499-410b-a2d3-22835e7bced7-kube-api-access-xvml4\") pod \"coredns-7db6d8ff4d-c4nxw\" (UID: \"84634600-f499-410b-a2d3-22835e7bced7\") " pod="kube-system/coredns-7db6d8ff4d-c4nxw" Mar 17 17:32:56.069621 containerd[1483]: time="2025-03-17T17:32:56.067060343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nwg6r,Uid:baa91bd7-5796-4a81-86dd-0b590ce48951,Namespace:kube-system,Attempt:0,}" Mar 17 17:32:56.072881 containerd[1483]: time="2025-03-17T17:32:56.072836289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c4nxw,Uid:84634600-f499-410b-a2d3-22835e7bced7,Namespace:kube-system,Attempt:0,}" Mar 17 17:32:56.499417 kubelet[2658]: I0317 17:32:56.499257 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vdlmk" podStartSLOduration=5.289513469 podStartE2EDuration="41.499241644s" podCreationTimestamp="2025-03-17 17:32:15 +0000 UTC" firstStartedPulling="2025-03-17 17:32:15.646185037 +0000 UTC m=+16.413782679" lastFinishedPulling="2025-03-17 17:32:51.855913252 +0000 UTC m=+52.623510854" observedRunningTime="2025-03-17 17:32:56.498478246 +0000 UTC m=+57.266075888" watchObservedRunningTime="2025-03-17 17:32:56.499241644 +0000 UTC m=+57.266839286" Mar 17 17:32:57.312188 systemd[1]: Started sshd@13-10.0.0.45:22-10.0.0.1:35196.service - OpenSSH per-connection server daemon (10.0.0.1:35196). Mar 17 17:32:57.359703 sshd[3605]: Accepted publickey for core from 10.0.0.1 port 35196 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:32:57.361145 sshd-session[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:32:57.365123 systemd-logind[1468]: New session 14 of user core. Mar 17 17:32:57.375924 systemd[1]: Started session-14.scope - Session 14 of User core. 
Mar 17 17:32:57.500640 sshd[3607]: Connection closed by 10.0.0.1 port 35196 Mar 17 17:32:57.500984 sshd-session[3605]: pam_unix(sshd:session): session closed for user core Mar 17 17:32:57.504284 systemd[1]: sshd@13-10.0.0.45:22-10.0.0.1:35196.service: Deactivated successfully. Mar 17 17:32:57.506297 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:32:57.507978 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:32:57.509009 systemd-logind[1468]: Removed session 14. Mar 17 17:32:57.862501 systemd-networkd[1411]: cilium_host: Link UP Mar 17 17:32:57.862606 systemd-networkd[1411]: cilium_net: Link UP Mar 17 17:32:57.862748 systemd-networkd[1411]: cilium_net: Gained carrier Mar 17 17:32:57.862861 systemd-networkd[1411]: cilium_host: Gained carrier Mar 17 17:32:57.937482 systemd-networkd[1411]: cilium_vxlan: Link UP Mar 17 17:32:57.937756 systemd-networkd[1411]: cilium_vxlan: Gained carrier Mar 17 17:32:58.263858 systemd-networkd[1411]: cilium_net: Gained IPv6LL Mar 17 17:32:58.270501 kernel: NET: Registered PF_ALG protocol family Mar 17 17:32:58.639897 systemd-networkd[1411]: cilium_host: Gained IPv6LL Mar 17 17:32:58.878677 systemd-networkd[1411]: lxc_health: Link UP Mar 17 17:32:58.880159 systemd-networkd[1411]: lxc_health: Gained carrier Mar 17 17:32:59.248989 kernel: eth0: renamed from tmpd73cc Mar 17 17:32:59.253765 kernel: eth0: renamed from tmp1eab5 Mar 17 17:32:59.260137 systemd-networkd[1411]: lxc0c4b221bebff: Link UP Mar 17 17:32:59.263797 systemd-networkd[1411]: lxc6a01fe872877: Link UP Mar 17 17:32:59.264332 systemd-networkd[1411]: lxc6a01fe872877: Gained carrier Mar 17 17:32:59.264449 systemd-networkd[1411]: lxc0c4b221bebff: Gained carrier Mar 17 17:32:59.535884 systemd-networkd[1411]: cilium_vxlan: Gained IPv6LL Mar 17 17:33:00.367922 systemd-networkd[1411]: lxc6a01fe872877: Gained IPv6LL Mar 17 17:33:00.687899 systemd-networkd[1411]: lxc_health: Gained IPv6LL Mar 17 17:33:01.135934 systemd-networkd[1411]: lxc0c4b221bebff: Gained IPv6LL Mar 17 17:33:02.521289 systemd[1]: Started sshd@14-10.0.0.45:22-10.0.0.1:34426.service - OpenSSH per-connection server daemon (10.0.0.1:34426). Mar 17 17:33:02.571228 sshd[4005]: Accepted publickey for core from 10.0.0.1 port 34426 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:33:02.572880 sshd-session[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:33:02.579527 systemd-logind[1468]: New session 15 of user core. Mar 17 17:33:02.582924 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 17:33:02.721738 sshd[4007]: Connection closed by 10.0.0.1 port 34426 Mar 17 17:33:02.722311 sshd-session[4005]: pam_unix(sshd:session): session closed for user core Mar 17 17:33:02.726400 systemd[1]: sshd@14-10.0.0.45:22-10.0.0.1:34426.service: Deactivated successfully. Mar 17 17:33:02.728647 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:33:02.730354 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:33:02.731461 systemd-logind[1468]: Removed session 15. Mar 17 17:33:02.928440 containerd[1483]: time="2025-03-17T17:33:02.928323587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:33:02.928440 containerd[1483]: time="2025-03-17T17:33:02.928408347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:33:02.929030 containerd[1483]: time="2025-03-17T17:33:02.928420547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:33:02.929598 containerd[1483]: time="2025-03-17T17:33:02.929476465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:33:02.942165 containerd[1483]: time="2025-03-17T17:33:02.942005198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:33:02.942165 containerd[1483]: time="2025-03-17T17:33:02.942085438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:33:02.943524 containerd[1483]: time="2025-03-17T17:33:02.942104078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:33:02.943524 containerd[1483]: time="2025-03-17T17:33:02.942217758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:33:02.946917 systemd[1]: Started cri-containerd-d73cc916a2fd902ece453417e7fe48f8051d622db31636c0e9122858cb761d80.scope - libcontainer container d73cc916a2fd902ece453417e7fe48f8051d622db31636c0e9122858cb761d80. Mar 17 17:33:02.976933 systemd[1]: Started cri-containerd-1eab59c1fb74db94e7bb00e12ded819c7461b613f776c993d7ce5364e766de5f.scope - libcontainer container 1eab59c1fb74db94e7bb00e12ded819c7461b613f776c993d7ce5364e766de5f. Mar 17 17:33:02.980652 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:33:02.988639 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:33:03.000082 containerd[1483]: time="2025-03-17T17:33:03.000044596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nwg6r,Uid:baa91bd7-5796-4a81-86dd-0b590ce48951,Namespace:kube-system,Attempt:0,} returns sandbox id \"d73cc916a2fd902ece453417e7fe48f8051d622db31636c0e9122858cb761d80\"" Mar 17 17:33:03.004990 containerd[1483]: time="2025-03-17T17:33:03.004882186Z" level=info msg="CreateContainer within sandbox \"d73cc916a2fd902ece453417e7fe48f8051d622db31636c0e9122858cb761d80\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:33:03.017531 containerd[1483]: time="2025-03-17T17:33:03.017481320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c4nxw,Uid:84634600-f499-410b-a2d3-22835e7bced7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1eab59c1fb74db94e7bb00e12ded819c7461b613f776c993d7ce5364e766de5f\"" Mar 17 17:33:03.018695 containerd[1483]: time="2025-03-17T17:33:03.018654877Z" level=info msg="CreateContainer within sandbox \"d73cc916a2fd902ece453417e7fe48f8051d622db31636c0e9122858cb761d80\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69f0179773ceb4a11fb944b2a8881f2218c37fa8ac328e62d64791a501d866dd\"" Mar 17 17:33:03.020800 containerd[1483]: time="2025-03-17T17:33:03.019961635Z" level=info msg="StartContainer for \"69f0179773ceb4a11fb944b2a8881f2218c37fa8ac328e62d64791a501d866dd\"" Mar 17 17:33:03.020800 containerd[1483]: time="2025-03-17T17:33:03.020771033Z" level=info 
msg="CreateContainer within sandbox \"1eab59c1fb74db94e7bb00e12ded819c7461b613f776c993d7ce5364e766de5f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:33:03.034875 containerd[1483]: time="2025-03-17T17:33:03.034830364Z" level=info msg="CreateContainer within sandbox \"1eab59c1fb74db94e7bb00e12ded819c7461b613f776c993d7ce5364e766de5f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20d1a569c09fa66c8f776210aedde2c7ca31acb34b21b49f4cd02410bababca4\"" Mar 17 17:33:03.036347 containerd[1483]: time="2025-03-17T17:33:03.036320601Z" level=info msg="StartContainer for \"20d1a569c09fa66c8f776210aedde2c7ca31acb34b21b49f4cd02410bababca4\"" Mar 17 17:33:03.046884 systemd[1]: Started cri-containerd-69f0179773ceb4a11fb944b2a8881f2218c37fa8ac328e62d64791a501d866dd.scope - libcontainer container 69f0179773ceb4a11fb944b2a8881f2218c37fa8ac328e62d64791a501d866dd. Mar 17 17:33:03.067912 systemd[1]: Started cri-containerd-20d1a569c09fa66c8f776210aedde2c7ca31acb34b21b49f4cd02410bababca4.scope - libcontainer container 20d1a569c09fa66c8f776210aedde2c7ca31acb34b21b49f4cd02410bababca4. Mar 17 17:33:03.088484 containerd[1483]: time="2025-03-17T17:33:03.088445452Z" level=info msg="StartContainer for \"69f0179773ceb4a11fb944b2a8881f2218c37fa8ac328e62d64791a501d866dd\" returns successfully" Mar 17 17:33:03.095325 containerd[1483]: time="2025-03-17T17:33:03.095272838Z" level=info msg="StartContainer for \"20d1a569c09fa66c8f776210aedde2c7ca31acb34b21b49f4cd02410bababca4\" returns successfully" Mar 17 17:33:03.512375 kubelet[2658]: I0317 17:33:03.512316 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-c4nxw" podStartSLOduration=48.512299889 podStartE2EDuration="48.512299889s" podCreationTimestamp="2025-03-17 17:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:33:03.512131809 +0000 UTC m=+64.279729451" watchObservedRunningTime="2025-03-17 17:33:03.512299889 +0000 UTC m=+64.279897531" Mar 17 17:33:03.522820 kubelet[2658]: I0317 17:33:03.522425 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nwg6r" podStartSLOduration=48.522407588 podStartE2EDuration="48.522407588s" podCreationTimestamp="2025-03-17 17:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:33:03.522159868 +0000 UTC m=+64.289757510" watchObservedRunningTime="2025-03-17 17:33:03.522407588 +0000 UTC m=+64.290005230" Mar 17 17:33:03.937650 systemd[1]: run-containerd-runc-k8s.io-1eab59c1fb74db94e7bb00e12ded819c7461b613f776c993d7ce5364e766de5f-runc.UYnkxg.mount: Deactivated successfully. Mar 17 17:33:07.740474 systemd[1]: Started sshd@15-10.0.0.45:22-10.0.0.1:34434.service - OpenSSH per-connection server daemon (10.0.0.1:34434). Mar 17 17:33:07.799085 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 34434 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:33:07.800805 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:33:07.806657 systemd-logind[1468]: New session 16 of user core. Mar 17 17:33:07.823116 systemd[1]: Started session-16.scope - Session 16 of User core. 
Mar 17 17:33:07.945333 sshd[4196]: Connection closed by 10.0.0.1 port 34434 Mar 17 17:33:07.945750 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Mar 17 17:33:07.953018 systemd[1]: sshd@15-10.0.0.45:22-10.0.0.1:34434.service: Deactivated successfully. Mar 17 17:33:07.954909 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:33:07.957585 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:33:07.958965 systemd-logind[1468]: Removed session 16. Mar 17 17:33:12.957046 systemd[1]: Started sshd@16-10.0.0.45:22-10.0.0.1:49936.service - OpenSSH per-connection server daemon (10.0.0.1:49936). Mar 17 17:33:12.998206 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 49936 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:33:12.999340 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:33:13.004592 systemd-logind[1468]: New session 17 of user core. Mar 17 17:33:13.018872 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:33:13.125857 sshd[4215]: Connection closed by 10.0.0.1 port 49936 Mar 17 17:33:13.126448 sshd-session[4213]: pam_unix(sshd:session): session closed for user core Mar 17 17:33:13.129537 systemd[1]: sshd@16-10.0.0.45:22-10.0.0.1:49936.service: Deactivated successfully. Mar 17 17:33:13.131238 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:33:13.132533 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:33:13.134039 systemd-logind[1468]: Removed session 17. Mar 17 17:33:18.141070 systemd[1]: Started sshd@17-10.0.0.45:22-10.0.0.1:49940.service - OpenSSH per-connection server daemon (10.0.0.1:49940). Mar 17 17:33:18.184259 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 49940 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:33:18.184708 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:33:18.188795 systemd-logind[1468]: New session 18 of user core. Mar 17 17:33:18.198887 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 17:33:18.309384 sshd[4234]: Connection closed by 10.0.0.1 port 49940 Mar 17 17:33:18.309711 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Mar 17 17:33:18.312826 systemd[1]: sshd@17-10.0.0.45:22-10.0.0.1:49940.service: Deactivated successfully. Mar 17 17:33:18.314911 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:33:18.315739 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:33:18.316574 systemd-logind[1468]: Removed session 18. Mar 17 17:33:23.321217 systemd[1]: Started sshd@18-10.0.0.45:22-10.0.0.1:58282.service - OpenSSH per-connection server daemon (10.0.0.1:58282). Mar 17 17:33:23.365320 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 58282 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:33:23.366385 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:33:23.370123 systemd-logind[1468]: New session 19 of user core. Mar 17 17:33:23.386979 systemd[1]: Started session-19.scope - Session 19 of User core. 
Mar 17 17:33:23.497297 sshd[4250]: Connection closed by 10.0.0.1 port 58282 Mar 17 17:33:23.497650 sshd-session[4248]: pam_unix(sshd:session): session closed for user core Mar 17 17:33:23.501026 systemd[1]: sshd@18-10.0.0.45:22-10.0.0.1:58282.service: Deactivated successfully. Mar 17 17:33:23.503204 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:33:23.504097 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:33:23.504960 systemd-logind[1468]: Removed session 19. Mar 17 17:33:28.509601 systemd[1]: Started sshd@19-10.0.0.45:22-10.0.0.1:58292.service - OpenSSH per-connection server daemon (10.0.0.1:58292). Mar 17 17:33:28.551380 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 58292 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:33:28.552671 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:33:28.556576 systemd-logind[1468]: New session 20 of user core. Mar 17 17:33:28.562899 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 17 17:33:28.667358 sshd[4266]: Connection closed by 10.0.0.1 port 58292 Mar 17 17:33:28.667692 sshd-session[4264]: pam_unix(sshd:session): session closed for user core Mar 17 17:33:28.670927 systemd[1]: sshd@19-10.0.0.45:22-10.0.0.1:58292.service: Deactivated successfully. Mar 17 17:33:28.673525 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 17:33:28.674553 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:33:28.675402 systemd-logind[1468]: Removed session 20. Mar 17 17:33:33.685000 systemd[1]: Started sshd@20-10.0.0.45:22-10.0.0.1:53994.service - OpenSSH per-connection server daemon (10.0.0.1:53994). Mar 17 17:33:33.728490 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 53994 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:33:33.729969 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:33:33.734175 systemd-logind[1468]: New session 21 of user core. Mar 17 17:33:33.742950 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 17 17:33:33.850832 sshd[4282]: Connection closed by 10.0.0.1 port 53994 Mar 17 17:33:33.851207 sshd-session[4280]: pam_unix(sshd:session): session closed for user core Mar 17 17:33:33.854073 systemd[1]: sshd@20-10.0.0.45:22-10.0.0.1:53994.service: Deactivated successfully. Mar 17 17:33:33.855829 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 17:33:33.857220 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit. Mar 17 17:33:33.858454 systemd-logind[1468]: Removed session 21. Mar 17 17:33:38.862411 systemd[1]: Started sshd@21-10.0.0.45:22-10.0.0.1:53998.service - OpenSSH per-connection server daemon (10.0.0.1:53998). Mar 17 17:33:38.904212 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 53998 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:33:38.905560 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:33:38.909797 systemd-logind[1468]: New session 22 of user core. Mar 17 17:33:38.915885 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 17 17:33:39.024510 sshd[4298]: Connection closed by 10.0.0.1 port 53998 Mar 17 17:33:39.024956 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Mar 17 17:33:39.028502 systemd[1]: sshd@21-10.0.0.45:22-10.0.0.1:53998.service: Deactivated successfully. Mar 17 17:33:39.031437 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 17:33:39.032108 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit. Mar 17 17:33:39.032912 systemd-logind[1468]: Removed session 22. Mar 17 17:33:44.036018 systemd[1]: Started sshd@22-10.0.0.45:22-10.0.0.1:37048.service - OpenSSH per-connection server daemon (10.0.0.1:37048). Mar 17 17:33:44.077453 sshd[4312]: Accepted publickey for core from 10.0.0.1 port 37048 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:33:44.078556 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:33:44.082343 systemd-logind[1468]: New session 23 of user core. Mar 17 17:33:44.092929 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 17:33:44.197413 sshd[4314]: Connection closed by 10.0.0.1 port 37048 Mar 17 17:33:44.197749 sshd-session[4312]: pam_unix(sshd:session): session closed for user core Mar 17 17:33:44.201116 systemd[1]: sshd@22-10.0.0.45:22-10.0.0.1:37048.service: Deactivated successfully. Mar 17 17:33:44.202920 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 17:33:44.203539 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit. Mar 17 17:33:44.204309 systemd-logind[1468]: Removed session 23. Mar 17 17:33:49.210104 systemd[1]: Started sshd@23-10.0.0.45:22-10.0.0.1:37052.service - OpenSSH per-connection server daemon (10.0.0.1:37052). Mar 17 17:33:49.251612 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 37052 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:33:49.253020 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:33:49.257279 systemd-logind[1468]: New session 24 of user core. Mar 17 17:33:49.266900 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 17 17:33:49.378283 sshd[4332]: Connection closed by 10.0.0.1 port 37052 Mar 17 17:33:49.379101 sshd-session[4330]: pam_unix(sshd:session): session closed for user core Mar 17 17:33:49.382024 systemd[1]: sshd@23-10.0.0.45:22-10.0.0.1:37052.service: Deactivated successfully. Mar 17 17:33:49.384222 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 17:33:49.386085 systemd-logind[1468]: Session 24 logged out. Waiting for processes to exit. Mar 17 17:33:49.386957 systemd-logind[1468]: Removed session 24. Mar 17 17:33:54.394209 systemd[1]: Started sshd@24-10.0.0.45:22-10.0.0.1:53780.service - OpenSSH per-connection server daemon (10.0.0.1:53780). Mar 17 17:33:54.438381 sshd[4347]: Accepted publickey for core from 10.0.0.1 port 53780 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:33:54.439407 sshd-session[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:33:54.442818 systemd-logind[1468]: New session 25 of user core. Mar 17 17:33:54.451879 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 17 17:33:54.558661 sshd[4349]: Connection closed by 10.0.0.1 port 53780 Mar 17 17:33:54.559014 sshd-session[4347]: pam_unix(sshd:session): session closed for user core Mar 17 17:33:54.562791 systemd[1]: sshd@24-10.0.0.45:22-10.0.0.1:53780.service: Deactivated successfully. Mar 17 17:33:54.565111 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 17:33:54.565718 systemd-logind[1468]: Session 25 logged out. Waiting for processes to exit. Mar 17 17:33:54.566524 systemd-logind[1468]: Removed session 25. Mar 17 17:33:59.574363 systemd[1]: Started sshd@25-10.0.0.45:22-10.0.0.1:53784.service - OpenSSH per-connection server daemon (10.0.0.1:53784). Mar 17 17:33:59.618264 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 53784 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:33:59.619747 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:33:59.624041 systemd-logind[1468]: New session 26 of user core. Mar 17 17:33:59.635948 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 17 17:33:59.752779 sshd[4369]: Connection closed by 10.0.0.1 port 53784 Mar 17 17:33:59.752821 sshd-session[4367]: pam_unix(sshd:session): session closed for user core Mar 17 17:33:59.756310 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 17:33:59.757636 systemd[1]: sshd@25-10.0.0.45:22-10.0.0.1:53784.service: Deactivated successfully. Mar 17 17:33:59.759714 systemd-logind[1468]: Session 26 logged out. Waiting for processes to exit. Mar 17 17:33:59.760606 systemd-logind[1468]: Removed session 26. Mar 17 17:34:04.766459 systemd[1]: Started sshd@26-10.0.0.45:22-10.0.0.1:35970.service - OpenSSH per-connection server daemon (10.0.0.1:35970). Mar 17 17:34:04.808816 sshd[4384]: Accepted publickey for core from 10.0.0.1 port 35970 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:34:04.810159 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:34:04.814791 systemd-logind[1468]: New session 27 of user core. Mar 17 17:34:04.831915 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 17 17:34:04.945348 sshd[4386]: Connection closed by 10.0.0.1 port 35970 Mar 17 17:34:04.945898 sshd-session[4384]: pam_unix(sshd:session): session closed for user core Mar 17 17:34:04.949416 systemd[1]: sshd@26-10.0.0.45:22-10.0.0.1:35970.service: Deactivated successfully. Mar 17 17:34:04.952400 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 17:34:04.953072 systemd-logind[1468]: Session 27 logged out. Waiting for processes to exit. Mar 17 17:34:04.954085 systemd-logind[1468]: Removed session 27. Mar 17 17:34:09.959090 systemd[1]: Started sshd@27-10.0.0.45:22-10.0.0.1:35974.service - OpenSSH per-connection server daemon (10.0.0.1:35974). Mar 17 17:34:10.006013 sshd[4401]: Accepted publickey for core from 10.0.0.1 port 35974 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:34:10.007331 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:34:10.011854 systemd-logind[1468]: New session 28 of user core. Mar 17 17:34:10.017896 systemd[1]: Started session-28.scope - Session 28 of User core. 
Mar 17 17:34:10.122485 sshd[4403]: Connection closed by 10.0.0.1 port 35974 Mar 17 17:34:10.122833 sshd-session[4401]: pam_unix(sshd:session): session closed for user core Mar 17 17:34:10.125378 systemd[1]: sshd@27-10.0.0.45:22-10.0.0.1:35974.service: Deactivated successfully. Mar 17 17:34:10.128000 systemd[1]: session-28.scope: Deactivated successfully. Mar 17 17:34:10.129571 systemd-logind[1468]: Session 28 logged out. Waiting for processes to exit. Mar 17 17:34:10.130464 systemd-logind[1468]: Removed session 28. Mar 17 17:34:10.337474 kubelet[2658]: E0317 17:34:10.333399 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:34:15.134667 systemd[1]: Started sshd@28-10.0.0.45:22-10.0.0.1:53464.service - OpenSSH per-connection server daemon (10.0.0.1:53464). Mar 17 17:34:15.179809 sshd[4420]: Accepted publickey for core from 10.0.0.1 port 53464 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:34:15.181362 sshd-session[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:34:15.186920 systemd-logind[1468]: New session 29 of user core. Mar 17 17:34:15.200902 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 17 17:34:15.333625 sshd[4423]: Connection closed by 10.0.0.1 port 53464 Mar 17 17:34:15.332938 sshd-session[4420]: pam_unix(sshd:session): session closed for user core Mar 17 17:34:15.337988 systemd-logind[1468]: Session 29 logged out. Waiting for processes to exit. Mar 17 17:34:15.338544 systemd[1]: sshd@28-10.0.0.45:22-10.0.0.1:53464.service: Deactivated successfully. Mar 17 17:34:15.340458 systemd[1]: session-29.scope: Deactivated successfully. Mar 17 17:34:15.341647 systemd-logind[1468]: Removed session 29. Mar 17 17:34:19.333648 kubelet[2658]: E0317 17:34:19.333565 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:34:20.345341 systemd[1]: Started sshd@29-10.0.0.45:22-10.0.0.1:53468.service - OpenSSH per-connection server daemon (10.0.0.1:53468). Mar 17 17:34:20.386257 sshd[4441]: Accepted publickey for core from 10.0.0.1 port 53468 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:34:20.387380 sshd-session[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:34:20.391176 systemd-logind[1468]: New session 30 of user core. Mar 17 17:34:20.400923 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 17 17:34:20.507628 sshd[4443]: Connection closed by 10.0.0.1 port 53468 Mar 17 17:34:20.508232 sshd-session[4441]: pam_unix(sshd:session): session closed for user core Mar 17 17:34:20.511338 systemd[1]: sshd@29-10.0.0.45:22-10.0.0.1:53468.service: Deactivated successfully. Mar 17 17:34:20.514261 systemd[1]: session-30.scope: Deactivated successfully. Mar 17 17:34:20.514898 systemd-logind[1468]: Session 30 logged out. Waiting for processes to exit. Mar 17 17:34:20.516063 systemd-logind[1468]: Removed session 30. Mar 17 17:34:25.522228 systemd[1]: Started sshd@30-10.0.0.45:22-10.0.0.1:36626.service - OpenSSH per-connection server daemon (10.0.0.1:36626). 
Mar 17 17:34:25.564113 sshd[4457]: Accepted publickey for core from 10.0.0.1 port 36626 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:34:25.565502 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:34:25.570139 systemd-logind[1468]: New session 31 of user core. Mar 17 17:34:25.579961 systemd[1]: Started session-31.scope - Session 31 of User core. Mar 17 17:34:25.688846 sshd[4459]: Connection closed by 10.0.0.1 port 36626 Mar 17 17:34:25.689357 sshd-session[4457]: pam_unix(sshd:session): session closed for user core Mar 17 17:34:25.692926 systemd[1]: sshd@30-10.0.0.45:22-10.0.0.1:36626.service: Deactivated successfully. Mar 17 17:34:25.694656 systemd[1]: session-31.scope: Deactivated successfully. Mar 17 17:34:25.696313 systemd-logind[1468]: Session 31 logged out. Waiting for processes to exit. Mar 17 17:34:25.697493 systemd-logind[1468]: Removed session 31. Mar 17 17:34:27.333311 kubelet[2658]: E0317 17:34:27.332417 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:34:29.332543 kubelet[2658]: E0317 17:34:29.332513 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:34:30.700128 systemd[1]: Started sshd@31-10.0.0.45:22-10.0.0.1:36636.service - OpenSSH per-connection server daemon (10.0.0.1:36636). Mar 17 17:34:30.741859 sshd[4474]: Accepted publickey for core from 10.0.0.1 port 36636 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:34:30.743128 sshd-session[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:34:30.747080 systemd-logind[1468]: New session 32 of user core. Mar 17 17:34:30.757894 systemd[1]: Started session-32.scope - Session 32 of User core. Mar 17 17:34:30.864670 sshd[4476]: Connection closed by 10.0.0.1 port 36636 Mar 17 17:34:30.865253 sshd-session[4474]: pam_unix(sshd:session): session closed for user core Mar 17 17:34:30.868225 systemd-logind[1468]: Session 32 logged out. Waiting for processes to exit. Mar 17 17:34:30.868527 systemd[1]: sshd@31-10.0.0.45:22-10.0.0.1:36636.service: Deactivated successfully. Mar 17 17:34:30.870311 systemd[1]: session-32.scope: Deactivated successfully. Mar 17 17:34:30.872392 systemd-logind[1468]: Removed session 32. Mar 17 17:34:35.880617 systemd[1]: Started sshd@32-10.0.0.45:22-10.0.0.1:59362.service - OpenSSH per-connection server daemon (10.0.0.1:59362). Mar 17 17:34:35.928302 sshd[4490]: Accepted publickey for core from 10.0.0.1 port 59362 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:34:35.929689 sshd-session[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:34:35.934100 systemd-logind[1468]: New session 33 of user core. Mar 17 17:34:35.943949 systemd[1]: Started session-33.scope - Session 33 of User core. Mar 17 17:34:36.062819 sshd[4492]: Connection closed by 10.0.0.1 port 59362 Mar 17 17:34:36.063403 sshd-session[4490]: pam_unix(sshd:session): session closed for user core Mar 17 17:34:36.066925 systemd[1]: sshd@32-10.0.0.45:22-10.0.0.1:59362.service: Deactivated successfully. Mar 17 17:34:36.069617 systemd[1]: session-33.scope: Deactivated successfully. Mar 17 17:34:36.070574 systemd-logind[1468]: Session 33 logged out. Waiting for processes to exit. 
Mar 17 17:34:36.071477 systemd-logind[1468]: Removed session 33. Mar 17 17:34:41.074580 systemd[1]: Started sshd@33-10.0.0.45:22-10.0.0.1:59374.service - OpenSSH per-connection server daemon (10.0.0.1:59374). Mar 17 17:34:41.121297 sshd[4506]: Accepted publickey for core from 10.0.0.1 port 59374 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:34:41.122919 sshd-session[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:34:41.127711 systemd-logind[1468]: New session 34 of user core. Mar 17 17:34:41.139966 systemd[1]: Started session-34.scope - Session 34 of User core. Mar 17 17:34:41.266642 sshd[4508]: Connection closed by 10.0.0.1 port 59374 Mar 17 17:34:41.267103 sshd-session[4506]: pam_unix(sshd:session): session closed for user core Mar 17 17:34:41.270251 systemd[1]: sshd@33-10.0.0.45:22-10.0.0.1:59374.service: Deactivated successfully. Mar 17 17:34:41.271966 systemd[1]: session-34.scope: Deactivated successfully. Mar 17 17:34:41.273647 systemd-logind[1468]: Session 34 logged out. Waiting for processes to exit. Mar 17 17:34:41.275096 systemd-logind[1468]: Removed session 34. Mar 17 17:34:46.279650 systemd[1]: Started sshd@34-10.0.0.45:22-10.0.0.1:46250.service - OpenSSH per-connection server daemon (10.0.0.1:46250). Mar 17 17:34:46.322013 sshd[4524]: Accepted publickey for core from 10.0.0.1 port 46250 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:34:46.323251 sshd-session[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:34:46.327160 systemd-logind[1468]: New session 35 of user core. Mar 17 17:34:46.332911 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 17 17:34:46.442236 sshd[4526]: Connection closed by 10.0.0.1 port 46250 Mar 17 17:34:46.442242 sshd-session[4524]: pam_unix(sshd:session): session closed for user core Mar 17 17:34:46.445846 systemd[1]: sshd@34-10.0.0.45:22-10.0.0.1:46250.service: Deactivated successfully. Mar 17 17:34:46.447590 systemd[1]: session-35.scope: Deactivated successfully. Mar 17 17:34:46.450224 systemd-logind[1468]: Session 35 logged out. Waiting for processes to exit. Mar 17 17:34:46.451239 systemd-logind[1468]: Removed session 35. Mar 17 17:34:51.460291 systemd[1]: Started sshd@35-10.0.0.45:22-10.0.0.1:46254.service - OpenSSH per-connection server daemon (10.0.0.1:46254). Mar 17 17:34:51.503352 sshd[4541]: Accepted publickey for core from 10.0.0.1 port 46254 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:34:51.504680 sshd-session[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:34:51.509138 systemd-logind[1468]: New session 36 of user core. Mar 17 17:34:51.522977 systemd[1]: Started session-36.scope - Session 36 of User core. Mar 17 17:34:51.635358 sshd[4543]: Connection closed by 10.0.0.1 port 46254 Mar 17 17:34:51.635741 sshd-session[4541]: pam_unix(sshd:session): session closed for user core Mar 17 17:34:51.639809 systemd[1]: sshd@35-10.0.0.45:22-10.0.0.1:46254.service: Deactivated successfully. Mar 17 17:34:51.642588 systemd[1]: session-36.scope: Deactivated successfully. Mar 17 17:34:51.643233 systemd-logind[1468]: Session 36 logged out. Waiting for processes to exit. Mar 17 17:34:51.644102 systemd-logind[1468]: Removed session 36. 
Mar 17 17:34:52.332460 kubelet[2658]: E0317 17:34:52.332416 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:34:56.647217 systemd[1]: Started sshd@36-10.0.0.45:22-10.0.0.1:47892.service - OpenSSH per-connection server daemon (10.0.0.1:47892). Mar 17 17:34:56.688888 sshd[4558]: Accepted publickey for core from 10.0.0.1 port 47892 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:34:56.690123 sshd-session[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:34:56.694019 systemd-logind[1468]: New session 37 of user core. Mar 17 17:34:56.703908 systemd[1]: Started session-37.scope - Session 37 of User core. Mar 17 17:34:56.829287 sshd[4560]: Connection closed by 10.0.0.1 port 47892 Mar 17 17:34:56.829849 sshd-session[4558]: pam_unix(sshd:session): session closed for user core Mar 17 17:34:56.833068 systemd[1]: sshd@36-10.0.0.45:22-10.0.0.1:47892.service: Deactivated successfully. Mar 17 17:34:56.835212 systemd[1]: session-37.scope: Deactivated successfully. Mar 17 17:34:56.835852 systemd-logind[1468]: Session 37 logged out. Waiting for processes to exit. Mar 17 17:34:56.836881 systemd-logind[1468]: Removed session 37. Mar 17 17:34:57.333527 kubelet[2658]: E0317 17:34:57.333249 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:34:57.333910 kubelet[2658]: E0317 17:34:57.333579 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:01.846433 systemd[1]: Started sshd@37-10.0.0.45:22-10.0.0.1:47904.service - OpenSSH per-connection server daemon (10.0.0.1:47904). Mar 17 17:35:01.889569 sshd[4577]: Accepted publickey for core from 10.0.0.1 port 47904 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:35:01.890946 sshd-session[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:01.895343 systemd-logind[1468]: New session 38 of user core. Mar 17 17:35:01.901943 systemd[1]: Started session-38.scope - Session 38 of User core. Mar 17 17:35:02.016779 sshd[4579]: Connection closed by 10.0.0.1 port 47904 Mar 17 17:35:02.016292 sshd-session[4577]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:02.019711 systemd[1]: sshd@37-10.0.0.45:22-10.0.0.1:47904.service: Deactivated successfully. Mar 17 17:35:02.021472 systemd[1]: session-38.scope: Deactivated successfully. Mar 17 17:35:02.025495 systemd-logind[1468]: Session 38 logged out. Waiting for processes to exit. Mar 17 17:35:02.026957 systemd-logind[1468]: Removed session 38. Mar 17 17:35:07.038029 systemd[1]: Started sshd@38-10.0.0.45:22-10.0.0.1:44134.service - OpenSSH per-connection server daemon (10.0.0.1:44134). Mar 17 17:35:07.078151 sshd[4594]: Accepted publickey for core from 10.0.0.1 port 44134 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:35:07.079231 sshd-session[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:07.083667 systemd-logind[1468]: New session 39 of user core. Mar 17 17:35:07.095943 systemd[1]: Started session-39.scope - Session 39 of User core. 
Mar 17 17:35:07.209180 sshd[4596]: Connection closed by 10.0.0.1 port 44134 Mar 17 17:35:07.209647 sshd-session[4594]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:07.213410 systemd[1]: sshd@38-10.0.0.45:22-10.0.0.1:44134.service: Deactivated successfully. Mar 17 17:35:07.216304 systemd[1]: session-39.scope: Deactivated successfully. Mar 17 17:35:07.218368 systemd-logind[1468]: Session 39 logged out. Waiting for processes to exit. Mar 17 17:35:07.219184 systemd-logind[1468]: Removed session 39. Mar 17 17:35:07.333656 kubelet[2658]: E0317 17:35:07.333350 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:12.228042 systemd[1]: Started sshd@39-10.0.0.45:22-10.0.0.1:44146.service - OpenSSH per-connection server daemon (10.0.0.1:44146). Mar 17 17:35:12.268834 sshd[4612]: Accepted publickey for core from 10.0.0.1 port 44146 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:35:12.269883 sshd-session[4612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:12.273870 systemd-logind[1468]: New session 40 of user core. Mar 17 17:35:12.283954 systemd[1]: Started session-40.scope - Session 40 of User core. Mar 17 17:35:12.390228 sshd[4614]: Connection closed by 10.0.0.1 port 44146 Mar 17 17:35:12.390763 sshd-session[4612]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:12.400943 systemd[1]: sshd@39-10.0.0.45:22-10.0.0.1:44146.service: Deactivated successfully. Mar 17 17:35:12.403255 systemd[1]: session-40.scope: Deactivated successfully. Mar 17 17:35:12.404170 systemd-logind[1468]: Session 40 logged out. Waiting for processes to exit. Mar 17 17:35:12.414048 systemd[1]: Started sshd@40-10.0.0.45:22-10.0.0.1:44152.service - OpenSSH per-connection server daemon (10.0.0.1:44152). Mar 17 17:35:12.415454 systemd-logind[1468]: Removed session 40. Mar 17 17:35:12.451460 sshd[4627]: Accepted publickey for core from 10.0.0.1 port 44152 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:35:12.452527 sshd-session[4627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:12.456593 systemd-logind[1468]: New session 41 of user core. Mar 17 17:35:12.466887 systemd[1]: Started session-41.scope - Session 41 of User core. Mar 17 17:35:12.615227 sshd[4630]: Connection closed by 10.0.0.1 port 44152 Mar 17 17:35:12.615775 sshd-session[4627]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:12.631362 systemd[1]: Started sshd@41-10.0.0.45:22-10.0.0.1:55450.service - OpenSSH per-connection server daemon (10.0.0.1:55450). Mar 17 17:35:12.631827 systemd[1]: sshd@40-10.0.0.45:22-10.0.0.1:44152.service: Deactivated successfully. Mar 17 17:35:12.638302 systemd[1]: session-41.scope: Deactivated successfully. Mar 17 17:35:12.641302 systemd-logind[1468]: Session 41 logged out. Waiting for processes to exit. Mar 17 17:35:12.644292 systemd-logind[1468]: Removed session 41. Mar 17 17:35:12.683926 sshd[4639]: Accepted publickey for core from 10.0.0.1 port 55450 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:35:12.685358 sshd-session[4639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:12.689106 systemd-logind[1468]: New session 42 of user core. Mar 17 17:35:12.697867 systemd[1]: Started session-42.scope - Session 42 of User core. 
Mar 17 17:35:12.812511 sshd[4644]: Connection closed by 10.0.0.1 port 55450 Mar 17 17:35:12.812869 sshd-session[4639]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:12.815879 systemd[1]: sshd@41-10.0.0.45:22-10.0.0.1:55450.service: Deactivated successfully. Mar 17 17:35:12.817571 systemd[1]: session-42.scope: Deactivated successfully. Mar 17 17:35:12.818238 systemd-logind[1468]: Session 42 logged out. Waiting for processes to exit. Mar 17 17:35:12.819080 systemd-logind[1468]: Removed session 42. Mar 17 17:35:17.825136 systemd[1]: Started sshd@42-10.0.0.45:22-10.0.0.1:55464.service - OpenSSH per-connection server daemon (10.0.0.1:55464). Mar 17 17:35:17.870287 sshd[4659]: Accepted publickey for core from 10.0.0.1 port 55464 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:35:17.871775 sshd-session[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:17.875976 systemd-logind[1468]: New session 43 of user core. Mar 17 17:35:17.888915 systemd[1]: Started session-43.scope - Session 43 of User core. Mar 17 17:35:18.002686 sshd[4661]: Connection closed by 10.0.0.1 port 55464 Mar 17 17:35:18.003172 sshd-session[4659]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:18.018061 systemd[1]: sshd@42-10.0.0.45:22-10.0.0.1:55464.service: Deactivated successfully. Mar 17 17:35:18.019832 systemd[1]: session-43.scope: Deactivated successfully. Mar 17 17:35:18.021862 systemd-logind[1468]: Session 43 logged out. Waiting for processes to exit. Mar 17 17:35:18.023176 systemd[1]: Started sshd@43-10.0.0.45:22-10.0.0.1:55476.service - OpenSSH per-connection server daemon (10.0.0.1:55476). Mar 17 17:35:18.023828 systemd-logind[1468]: Removed session 43. Mar 17 17:35:18.066222 sshd[4673]: Accepted publickey for core from 10.0.0.1 port 55476 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:35:18.067477 sshd-session[4673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:18.071454 systemd-logind[1468]: New session 44 of user core. Mar 17 17:35:18.083941 systemd[1]: Started session-44.scope - Session 44 of User core. Mar 17 17:35:18.331711 sshd[4676]: Connection closed by 10.0.0.1 port 55476 Mar 17 17:35:18.331612 sshd-session[4673]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:18.345402 systemd[1]: sshd@43-10.0.0.45:22-10.0.0.1:55476.service: Deactivated successfully. Mar 17 17:35:18.349061 systemd[1]: session-44.scope: Deactivated successfully. Mar 17 17:35:18.351095 systemd-logind[1468]: Session 44 logged out. Waiting for processes to exit. Mar 17 17:35:18.360089 systemd[1]: Started sshd@44-10.0.0.45:22-10.0.0.1:55492.service - OpenSSH per-connection server daemon (10.0.0.1:55492). Mar 17 17:35:18.361557 systemd-logind[1468]: Removed session 44. Mar 17 17:35:18.408079 sshd[4686]: Accepted publickey for core from 10.0.0.1 port 55492 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:35:18.409406 sshd-session[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:18.414923 systemd-logind[1468]: New session 45 of user core. Mar 17 17:35:18.429945 systemd[1]: Started session-45.scope - Session 45 of User core. 
Mar 17 17:35:19.641645 sshd[4689]: Connection closed by 10.0.0.1 port 55492 Mar 17 17:35:19.643344 sshd-session[4686]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:19.657930 systemd[1]: sshd@44-10.0.0.45:22-10.0.0.1:55492.service: Deactivated successfully. Mar 17 17:35:19.661976 systemd[1]: session-45.scope: Deactivated successfully. Mar 17 17:35:19.663218 systemd-logind[1468]: Session 45 logged out. Waiting for processes to exit. Mar 17 17:35:19.673495 systemd[1]: Started sshd@45-10.0.0.45:22-10.0.0.1:55494.service - OpenSSH per-connection server daemon (10.0.0.1:55494). Mar 17 17:35:19.675030 systemd-logind[1468]: Removed session 45. Mar 17 17:35:19.714752 sshd[4707]: Accepted publickey for core from 10.0.0.1 port 55494 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:35:19.715873 sshd-session[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:19.720278 systemd-logind[1468]: New session 46 of user core. Mar 17 17:35:19.726859 systemd[1]: Started session-46.scope - Session 46 of User core. Mar 17 17:35:19.939058 sshd[4710]: Connection closed by 10.0.0.1 port 55494 Mar 17 17:35:19.939189 sshd-session[4707]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:19.947095 systemd[1]: sshd@45-10.0.0.45:22-10.0.0.1:55494.service: Deactivated successfully. Mar 17 17:35:19.948670 systemd[1]: session-46.scope: Deactivated successfully. Mar 17 17:35:19.950500 systemd-logind[1468]: Session 46 logged out. Waiting for processes to exit. Mar 17 17:35:19.957969 systemd[1]: Started sshd@46-10.0.0.45:22-10.0.0.1:55502.service - OpenSSH per-connection server daemon (10.0.0.1:55502). Mar 17 17:35:19.958891 systemd-logind[1468]: Removed session 46. Mar 17 17:35:19.995343 sshd[4721]: Accepted publickey for core from 10.0.0.1 port 55502 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:35:19.996489 sshd-session[4721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:20.000208 systemd-logind[1468]: New session 47 of user core. Mar 17 17:35:20.012889 systemd[1]: Started session-47.scope - Session 47 of User core. Mar 17 17:35:20.121288 sshd[4724]: Connection closed by 10.0.0.1 port 55502 Mar 17 17:35:20.121625 sshd-session[4721]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:20.124638 systemd[1]: sshd@46-10.0.0.45:22-10.0.0.1:55502.service: Deactivated successfully. Mar 17 17:35:20.126285 systemd[1]: session-47.scope: Deactivated successfully. Mar 17 17:35:20.126934 systemd-logind[1468]: Session 47 logged out. Waiting for processes to exit. Mar 17 17:35:20.128002 systemd-logind[1468]: Removed session 47. Mar 17 17:35:25.136108 systemd[1]: Started sshd@47-10.0.0.45:22-10.0.0.1:57816.service - OpenSSH per-connection server daemon (10.0.0.1:57816). Mar 17 17:35:25.180916 sshd[4741]: Accepted publickey for core from 10.0.0.1 port 57816 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:35:25.182184 sshd-session[4741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:25.186682 systemd-logind[1468]: New session 48 of user core. Mar 17 17:35:25.194933 systemd[1]: Started session-48.scope - Session 48 of User core. 
Mar 17 17:35:25.307318 sshd[4745]: Connection closed by 10.0.0.1 port 57816 Mar 17 17:35:25.306841 sshd-session[4741]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:25.309839 systemd[1]: sshd@47-10.0.0.45:22-10.0.0.1:57816.service: Deactivated successfully. Mar 17 17:35:25.313298 systemd[1]: session-48.scope: Deactivated successfully. Mar 17 17:35:25.313958 systemd-logind[1468]: Session 48 logged out. Waiting for processes to exit. Mar 17 17:35:25.314681 systemd-logind[1468]: Removed session 48. Mar 17 17:35:27.332781 kubelet[2658]: E0317 17:35:27.332645 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:30.322238 systemd[1]: Started sshd@48-10.0.0.45:22-10.0.0.1:57820.service - OpenSSH per-connection server daemon (10.0.0.1:57820). Mar 17 17:35:30.364499 sshd[4758]: Accepted publickey for core from 10.0.0.1 port 57820 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:35:30.365681 sshd-session[4758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:30.369799 systemd-logind[1468]: New session 49 of user core. Mar 17 17:35:30.382914 systemd[1]: Started session-49.scope - Session 49 of User core. Mar 17 17:35:30.492079 sshd[4760]: Connection closed by 10.0.0.1 port 57820 Mar 17 17:35:30.492409 sshd-session[4758]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:30.495264 systemd[1]: sshd@48-10.0.0.45:22-10.0.0.1:57820.service: Deactivated successfully. Mar 17 17:35:30.496996 systemd[1]: session-49.scope: Deactivated successfully. Mar 17 17:35:30.498923 systemd-logind[1468]: Session 49 logged out. Waiting for processes to exit. Mar 17 17:35:30.500241 systemd-logind[1468]: Removed session 49. Mar 17 17:35:31.336137 kubelet[2658]: E0317 17:35:31.336087 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:35:35.504096 systemd[1]: Started sshd@49-10.0.0.45:22-10.0.0.1:53038.service - OpenSSH per-connection server daemon (10.0.0.1:53038). Mar 17 17:35:35.545332 sshd[4774]: Accepted publickey for core from 10.0.0.1 port 53038 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:35:35.546555 sshd-session[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:35.550368 systemd-logind[1468]: New session 50 of user core. Mar 17 17:35:35.556880 systemd[1]: Started session-50.scope - Session 50 of User core. Mar 17 17:35:35.665839 sshd[4776]: Connection closed by 10.0.0.1 port 53038 Mar 17 17:35:35.666309 sshd-session[4774]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:35.682787 systemd[1]: sshd@49-10.0.0.45:22-10.0.0.1:53038.service: Deactivated successfully. Mar 17 17:35:35.685999 systemd[1]: session-50.scope: Deactivated successfully. Mar 17 17:35:35.687650 systemd-logind[1468]: Session 50 logged out. Waiting for processes to exit. Mar 17 17:35:35.696000 systemd[1]: Started sshd@50-10.0.0.45:22-10.0.0.1:53052.service - OpenSSH per-connection server daemon (10.0.0.1:53052). Mar 17 17:35:35.697338 systemd-logind[1468]: Removed session 50. 
Mar 17 17:35:35.733945 sshd[4789]: Accepted publickey for core from 10.0.0.1 port 53052 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:35:35.735075 sshd-session[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:35.738790 systemd-logind[1468]: New session 51 of user core. Mar 17 17:35:35.753942 systemd[1]: Started session-51.scope - Session 51 of User core. Mar 17 17:35:38.760281 containerd[1483]: time="2025-03-17T17:35:38.760233529Z" level=info msg="StopContainer for \"2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348\" with timeout 30 (s)" Mar 17 17:35:38.761789 containerd[1483]: time="2025-03-17T17:35:38.761758101Z" level=info msg="Stop container \"2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348\" with signal terminated" Mar 17 17:35:38.771315 systemd[1]: cri-containerd-2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348.scope: Deactivated successfully. Mar 17 17:35:38.791067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348-rootfs.mount: Deactivated successfully. Mar 17 17:35:38.811000 containerd[1483]: time="2025-03-17T17:35:38.810872701Z" level=info msg="StopContainer for \"894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946\" with timeout 2 (s)" Mar 17 17:35:38.811715 containerd[1483]: time="2025-03-17T17:35:38.811511072Z" level=info msg="Stop container \"894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946\" with signal terminated" Mar 17 17:35:38.815740 containerd[1483]: time="2025-03-17T17:35:38.815343020Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:35:38.816572 containerd[1483]: time="2025-03-17T17:35:38.816530247Z" level=info msg="shim disconnected" id=2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348 namespace=k8s.io Mar 17 17:35:38.816716 containerd[1483]: time="2025-03-17T17:35:38.816696360Z" level=warning msg="cleaning up after shim disconnected" id=2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348 namespace=k8s.io Mar 17 17:35:38.816814 containerd[1483]: time="2025-03-17T17:35:38.816798635Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:35:38.821499 systemd-networkd[1411]: lxc_health: Link DOWN Mar 17 17:35:38.821806 systemd-networkd[1411]: lxc_health: Lost carrier Mar 17 17:35:38.838577 systemd[1]: cri-containerd-894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946.scope: Deactivated successfully. Mar 17 17:35:38.839358 systemd[1]: cri-containerd-894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946.scope: Consumed 6.927s CPU time, 124.8M memory peak, 148K read from disk, 12.9M written to disk. Mar 17 17:35:38.855072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946-rootfs.mount: Deactivated successfully. 
Mar 17 17:35:38.856671 containerd[1483]: time="2025-03-17T17:35:38.856495617Z" level=info msg="StopContainer for \"2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348\" returns successfully" Mar 17 17:35:38.858382 containerd[1483]: time="2025-03-17T17:35:38.858223979Z" level=info msg="StopPodSandbox for \"a7459a070d82efac7ef1d2ce611ae4b57bdf9725907ce3d0a83fd202bed60933\"" Mar 17 17:35:38.858382 containerd[1483]: time="2025-03-17T17:35:38.858267337Z" level=info msg="Container to stop \"2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:35:38.860413 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7459a070d82efac7ef1d2ce611ae4b57bdf9725907ce3d0a83fd202bed60933-shm.mount: Deactivated successfully. Mar 17 17:35:38.863320 containerd[1483]: time="2025-03-17T17:35:38.863273593Z" level=info msg="shim disconnected" id=894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946 namespace=k8s.io Mar 17 17:35:38.863527 containerd[1483]: time="2025-03-17T17:35:38.863444665Z" level=warning msg="cleaning up after shim disconnected" id=894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946 namespace=k8s.io Mar 17 17:35:38.863527 containerd[1483]: time="2025-03-17T17:35:38.863465344Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:35:38.865174 systemd[1]: cri-containerd-a7459a070d82efac7ef1d2ce611ae4b57bdf9725907ce3d0a83fd202bed60933.scope: Deactivated successfully. Mar 17 17:35:38.883969 containerd[1483]: time="2025-03-17T17:35:38.883844911Z" level=info msg="StopContainer for \"894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946\" returns successfully" Mar 17 17:35:38.884467 containerd[1483]: time="2025-03-17T17:35:38.884349369Z" level=info msg="StopPodSandbox for \"ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6\"" Mar 17 17:35:38.884467 containerd[1483]: time="2025-03-17T17:35:38.884389287Z" level=info msg="Container to stop \"270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:35:38.884467 containerd[1483]: time="2025-03-17T17:35:38.884402526Z" level=info msg="Container to stop \"91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:35:38.884467 containerd[1483]: time="2025-03-17T17:35:38.884412366Z" level=info msg="Container to stop \"707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:35:38.884467 containerd[1483]: time="2025-03-17T17:35:38.884421605Z" level=info msg="Container to stop \"894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:35:38.884467 containerd[1483]: time="2025-03-17T17:35:38.884433005Z" level=info msg="Container to stop \"ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:35:38.886503 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6-shm.mount: Deactivated successfully. Mar 17 17:35:38.890102 systemd[1]: cri-containerd-ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6.scope: Deactivated successfully. 
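The StopContainer/shim-disconnected sequence above is the usual SIGTERM-then-SIGKILL shutdown with a grace period ("with timeout 30 (s)" for the operator container, "with timeout 2 (s)" for the agent). A sketch of the same flow against the public containerd Go client; the import path and socket path are the standard ones, and the CRI plugin's real implementation does considerably more bookkeeping:

package main

import (
	"context"
	"log"
	"syscall"
	"time"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// stopWithTimeout mirrors the logged sequence: send SIGTERM, wait up to the
// grace period, then SIGKILL, then clean up the task. A sketch, not the CRI
// plugin's code.
func stopWithTimeout(ctx context.Context, client *containerd.Client, id string, grace time.Duration) error {
	container, err := client.LoadContainer(ctx, id)
	if err != nil {
		return err
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		return err
	}
	exitCh, err := task.Wait(ctx) // subscribe to exit before signalling
	if err != nil {
		return err
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}
	select {
	case <-exitCh: // clean exit within the grace period
	case <-time.After(grace):
		if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
			return err
		}
		<-exitCh
	}
	_, err = task.Delete(ctx)
	return err
}

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	// "k8s.io" is the namespace the shim-disconnect messages above report.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	id := "2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348" // ID from the log
	if err := stopWithTimeout(ctx, client, id, 30*time.Second); err != nil {
		log.Fatal(err)
	}
}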
Mar 17 17:35:38.906941 containerd[1483]: time="2025-03-17T17:35:38.906874120Z" level=info msg="shim disconnected" id=a7459a070d82efac7ef1d2ce611ae4b57bdf9725907ce3d0a83fd202bed60933 namespace=k8s.io Mar 17 17:35:38.906941 containerd[1483]: time="2025-03-17T17:35:38.906935517Z" level=warning msg="cleaning up after shim disconnected" id=a7459a070d82efac7ef1d2ce611ae4b57bdf9725907ce3d0a83fd202bed60933 namespace=k8s.io Mar 17 17:35:38.906941 containerd[1483]: time="2025-03-17T17:35:38.906945116Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:35:38.918429 containerd[1483]: time="2025-03-17T17:35:38.918246690Z" level=info msg="shim disconnected" id=ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6 namespace=k8s.io Mar 17 17:35:38.918429 containerd[1483]: time="2025-03-17T17:35:38.918297528Z" level=warning msg="cleaning up after shim disconnected" id=ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6 namespace=k8s.io Mar 17 17:35:38.918429 containerd[1483]: time="2025-03-17T17:35:38.918305207Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:35:38.919894 containerd[1483]: time="2025-03-17T17:35:38.919838379Z" level=info msg="TearDown network for sandbox \"a7459a070d82efac7ef1d2ce611ae4b57bdf9725907ce3d0a83fd202bed60933\" successfully" Mar 17 17:35:38.919894 containerd[1483]: time="2025-03-17T17:35:38.919870777Z" level=info msg="StopPodSandbox for \"a7459a070d82efac7ef1d2ce611ae4b57bdf9725907ce3d0a83fd202bed60933\" returns successfully" Mar 17 17:35:38.931970 containerd[1483]: time="2025-03-17T17:35:38.931934037Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:35:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:35:38.934522 containerd[1483]: time="2025-03-17T17:35:38.934323810Z" level=info msg="TearDown network for sandbox \"ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6\" successfully" Mar 17 17:35:38.934522 containerd[1483]: time="2025-03-17T17:35:38.934359208Z" level=info msg="StopPodSandbox for \"ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6\" returns successfully" Mar 17 17:35:39.068023 kubelet[2658]: I0317 17:35:39.067169 2658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-host-proc-sys-kernel\") pod \"c33e36dc-9fca-4745-89ed-1c5048f53be4\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " Mar 17 17:35:39.068366 kubelet[2658]: I0317 17:35:39.068023 2658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c33e36dc-9fca-4745-89ed-1c5048f53be4-cilium-config-path\") pod \"c33e36dc-9fca-4745-89ed-1c5048f53be4\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " Mar 17 17:35:39.068366 kubelet[2658]: I0317 17:35:39.068047 2658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-bpf-maps\") pod \"c33e36dc-9fca-4745-89ed-1c5048f53be4\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " Mar 17 17:35:39.068366 kubelet[2658]: I0317 17:35:39.068064 2658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-cilium-run\") 
pod \"c33e36dc-9fca-4745-89ed-1c5048f53be4\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " Mar 17 17:35:39.068366 kubelet[2658]: I0317 17:35:39.068081 2658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-cilium-cgroup\") pod \"c33e36dc-9fca-4745-89ed-1c5048f53be4\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " Mar 17 17:35:39.068366 kubelet[2658]: I0317 17:35:39.068095 2658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-host-proc-sys-net\") pod \"c33e36dc-9fca-4745-89ed-1c5048f53be4\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " Mar 17 17:35:39.068366 kubelet[2658]: I0317 17:35:39.068109 2658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-xtables-lock\") pod \"c33e36dc-9fca-4745-89ed-1c5048f53be4\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " Mar 17 17:35:39.068505 kubelet[2658]: I0317 17:35:39.068126 2658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-etc-cni-netd\") pod \"c33e36dc-9fca-4745-89ed-1c5048f53be4\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " Mar 17 17:35:39.068505 kubelet[2658]: I0317 17:35:39.068140 2658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-lib-modules\") pod \"c33e36dc-9fca-4745-89ed-1c5048f53be4\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " Mar 17 17:35:39.068505 kubelet[2658]: I0317 17:35:39.068157 2658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b28dt\" (UniqueName: \"kubernetes.io/projected/c33e36dc-9fca-4745-89ed-1c5048f53be4-kube-api-access-b28dt\") pod \"c33e36dc-9fca-4745-89ed-1c5048f53be4\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " Mar 17 17:35:39.068505 kubelet[2658]: I0317 17:35:39.068171 2658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-cni-path\") pod \"c33e36dc-9fca-4745-89ed-1c5048f53be4\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " Mar 17 17:35:39.068505 kubelet[2658]: I0317 17:35:39.068185 2658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-hostproc\") pod \"c33e36dc-9fca-4745-89ed-1c5048f53be4\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " Mar 17 17:35:39.068505 kubelet[2658]: I0317 17:35:39.068199 2658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tkbj\" (UniqueName: \"kubernetes.io/projected/b2d86a1a-9477-4724-8766-30af5d545a54-kube-api-access-9tkbj\") pod \"b2d86a1a-9477-4724-8766-30af5d545a54\" (UID: \"b2d86a1a-9477-4724-8766-30af5d545a54\") " Mar 17 17:35:39.068620 kubelet[2658]: I0317 17:35:39.068215 2658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2d86a1a-9477-4724-8766-30af5d545a54-cilium-config-path\") pod 
\"b2d86a1a-9477-4724-8766-30af5d545a54\" (UID: \"b2d86a1a-9477-4724-8766-30af5d545a54\") " Mar 17 17:35:39.068620 kubelet[2658]: I0317 17:35:39.068231 2658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c33e36dc-9fca-4745-89ed-1c5048f53be4-hubble-tls\") pod \"c33e36dc-9fca-4745-89ed-1c5048f53be4\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " Mar 17 17:35:39.068620 kubelet[2658]: I0317 17:35:39.068249 2658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c33e36dc-9fca-4745-89ed-1c5048f53be4-clustermesh-secrets\") pod \"c33e36dc-9fca-4745-89ed-1c5048f53be4\" (UID: \"c33e36dc-9fca-4745-89ed-1c5048f53be4\") " Mar 17 17:35:39.072681 kubelet[2658]: I0317 17:35:39.071881 2658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c33e36dc-9fca-4745-89ed-1c5048f53be4" (UID: "c33e36dc-9fca-4745-89ed-1c5048f53be4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:35:39.072681 kubelet[2658]: I0317 17:35:39.071883 2658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c33e36dc-9fca-4745-89ed-1c5048f53be4" (UID: "c33e36dc-9fca-4745-89ed-1c5048f53be4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:35:39.073868 kubelet[2658]: I0317 17:35:39.073841 2658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c33e36dc-9fca-4745-89ed-1c5048f53be4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c33e36dc-9fca-4745-89ed-1c5048f53be4" (UID: "c33e36dc-9fca-4745-89ed-1c5048f53be4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:35:39.073938 kubelet[2658]: I0317 17:35:39.073886 2658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c33e36dc-9fca-4745-89ed-1c5048f53be4" (UID: "c33e36dc-9fca-4745-89ed-1c5048f53be4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:35:39.073938 kubelet[2658]: I0317 17:35:39.073903 2658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c33e36dc-9fca-4745-89ed-1c5048f53be4" (UID: "c33e36dc-9fca-4745-89ed-1c5048f53be4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:35:39.073938 kubelet[2658]: I0317 17:35:39.073917 2658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c33e36dc-9fca-4745-89ed-1c5048f53be4" (UID: "c33e36dc-9fca-4745-89ed-1c5048f53be4"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:35:39.073938 kubelet[2658]: I0317 17:35:39.073930 2658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c33e36dc-9fca-4745-89ed-1c5048f53be4" (UID: "c33e36dc-9fca-4745-89ed-1c5048f53be4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:35:39.074022 kubelet[2658]: I0317 17:35:39.073943 2658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c33e36dc-9fca-4745-89ed-1c5048f53be4" (UID: "c33e36dc-9fca-4745-89ed-1c5048f53be4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:35:39.074022 kubelet[2658]: I0317 17:35:39.073952 2658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-hostproc" (OuterVolumeSpecName: "hostproc") pod "c33e36dc-9fca-4745-89ed-1c5048f53be4" (UID: "c33e36dc-9fca-4745-89ed-1c5048f53be4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:35:39.074022 kubelet[2658]: I0317 17:35:39.074009 2658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d86a1a-9477-4724-8766-30af5d545a54-kube-api-access-9tkbj" (OuterVolumeSpecName: "kube-api-access-9tkbj") pod "b2d86a1a-9477-4724-8766-30af5d545a54" (UID: "b2d86a1a-9477-4724-8766-30af5d545a54"). InnerVolumeSpecName "kube-api-access-9tkbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:35:39.074083 kubelet[2658]: I0317 17:35:39.074047 2658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-cni-path" (OuterVolumeSpecName: "cni-path") pod "c33e36dc-9fca-4745-89ed-1c5048f53be4" (UID: "c33e36dc-9fca-4745-89ed-1c5048f53be4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:35:39.074342 kubelet[2658]: I0317 17:35:39.074265 2658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c33e36dc-9fca-4745-89ed-1c5048f53be4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c33e36dc-9fca-4745-89ed-1c5048f53be4" (UID: "c33e36dc-9fca-4745-89ed-1c5048f53be4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 17:35:39.075129 kubelet[2658]: I0317 17:35:39.075105 2658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c33e36dc-9fca-4745-89ed-1c5048f53be4" (UID: "c33e36dc-9fca-4745-89ed-1c5048f53be4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:35:39.075684 kubelet[2658]: I0317 17:35:39.075656 2658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2d86a1a-9477-4724-8766-30af5d545a54-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b2d86a1a-9477-4724-8766-30af5d545a54" (UID: "b2d86a1a-9477-4724-8766-30af5d545a54"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:35:39.076223 kubelet[2658]: I0317 17:35:39.076179 2658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33e36dc-9fca-4745-89ed-1c5048f53be4-kube-api-access-b28dt" (OuterVolumeSpecName: "kube-api-access-b28dt") pod "c33e36dc-9fca-4745-89ed-1c5048f53be4" (UID: "c33e36dc-9fca-4745-89ed-1c5048f53be4"). InnerVolumeSpecName "kube-api-access-b28dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:35:39.076223 kubelet[2658]: I0317 17:35:39.076210 2658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33e36dc-9fca-4745-89ed-1c5048f53be4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c33e36dc-9fca-4745-89ed-1c5048f53be4" (UID: "c33e36dc-9fca-4745-89ed-1c5048f53be4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:35:39.168506 kubelet[2658]: I0317 17:35:39.168462 2658 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 17 17:35:39.168506 kubelet[2658]: I0317 17:35:39.168498 2658 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 17 17:35:39.168506 kubelet[2658]: I0317 17:35:39.168508 2658 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 17 17:35:39.168506 kubelet[2658]: I0317 17:35:39.168517 2658 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-b28dt\" (UniqueName: \"kubernetes.io/projected/c33e36dc-9fca-4745-89ed-1c5048f53be4-kube-api-access-b28dt\") on node \"localhost\" DevicePath \"\"" Mar 17 17:35:39.168714 kubelet[2658]: I0317 17:35:39.168526 2658 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 17 17:35:39.168714 kubelet[2658]: I0317 17:35:39.168534 2658 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9tkbj\" (UniqueName: \"kubernetes.io/projected/b2d86a1a-9477-4724-8766-30af5d545a54-kube-api-access-9tkbj\") on node \"localhost\" DevicePath \"\"" Mar 17 17:35:39.168714 kubelet[2658]: I0317 17:35:39.168542 2658 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2d86a1a-9477-4724-8766-30af5d545a54-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 17:35:39.168714 kubelet[2658]: I0317 17:35:39.168551 2658 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c33e36dc-9fca-4745-89ed-1c5048f53be4-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 17 17:35:39.168714 kubelet[2658]: I0317 17:35:39.168559 2658 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c33e36dc-9fca-4745-89ed-1c5048f53be4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 17:35:39.168714 kubelet[2658]: I0317 17:35:39.168566 2658 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 17 17:35:39.168714 kubelet[2658]: I0317 17:35:39.168573 2658 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 17 17:35:39.168714 kubelet[2658]: I0317 17:35:39.168581 2658 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c33e36dc-9fca-4745-89ed-1c5048f53be4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 17:35:39.168901 kubelet[2658]: I0317 17:35:39.168589 2658 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 17 17:35:39.168901 kubelet[2658]: I0317 17:35:39.168597 2658 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 17 17:35:39.168901 kubelet[2658]: I0317 17:35:39.168606 2658 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 17 17:35:39.168901 kubelet[2658]: I0317 17:35:39.168613 2658 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c33e36dc-9fca-4745-89ed-1c5048f53be4-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 17 17:35:39.341434 systemd[1]: Removed slice kubepods-besteffort-podb2d86a1a_9477_4724_8766_30af5d545a54.slice - libcontainer container kubepods-besteffort-podb2d86a1a_9477_4724_8766_30af5d545a54.slice. Mar 17 17:35:39.342571 systemd[1]: Removed slice kubepods-burstable-podc33e36dc_9fca_4745_89ed_1c5048f53be4.slice - libcontainer container kubepods-burstable-podc33e36dc_9fca_4745_89ed_1c5048f53be4.slice. Mar 17 17:35:39.342666 systemd[1]: kubepods-burstable-podc33e36dc_9fca_4745_89ed_1c5048f53be4.slice: Consumed 7.074s CPU time, 125.1M memory peak, 168K read from disk, 12.9M written to disk. Mar 17 17:35:39.413097 kubelet[2658]: E0317 17:35:39.413036 2658 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:35:39.782979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7459a070d82efac7ef1d2ce611ae4b57bdf9725907ce3d0a83fd202bed60933-rootfs.mount: Deactivated successfully. Mar 17 17:35:39.783078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad274abcc20ae1d1fdd36b2f0459659d0becc90c3a2b8941845d458b722e26d6-rootfs.mount: Deactivated successfully. Mar 17 17:35:39.783131 systemd[1]: var-lib-kubelet-pods-b2d86a1a\x2d9477\x2d4724\x2d8766\x2d30af5d545a54-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9tkbj.mount: Deactivated successfully. Mar 17 17:35:39.783186 systemd[1]: var-lib-kubelet-pods-c33e36dc\x2d9fca\x2d4745\x2d89ed\x2d1c5048f53be4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db28dt.mount: Deactivated successfully. 
Mar 17 17:35:39.783236 systemd[1]: var-lib-kubelet-pods-c33e36dc\x2d9fca\x2d4745\x2d89ed\x2d1c5048f53be4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 17:35:39.783285 systemd[1]: var-lib-kubelet-pods-c33e36dc\x2d9fca\x2d4745\x2d89ed\x2d1c5048f53be4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 17:35:39.788876 kubelet[2658]: I0317 17:35:39.788838 2658 scope.go:117] "RemoveContainer" containerID="894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946" Mar 17 17:35:39.790516 containerd[1483]: time="2025-03-17T17:35:39.790471902Z" level=info msg="RemoveContainer for \"894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946\"" Mar 17 17:35:39.805383 containerd[1483]: time="2025-03-17T17:35:39.805335725Z" level=info msg="RemoveContainer for \"894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946\" returns successfully" Mar 17 17:35:39.806757 kubelet[2658]: I0317 17:35:39.806667 2658 scope.go:117] "RemoveContainer" containerID="270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3" Mar 17 17:35:39.810014 containerd[1483]: time="2025-03-17T17:35:39.808885048Z" level=info msg="RemoveContainer for \"270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3\"" Mar 17 17:35:39.813891 containerd[1483]: time="2025-03-17T17:35:39.813849349Z" level=info msg="RemoveContainer for \"270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3\" returns successfully" Mar 17 17:35:39.814118 kubelet[2658]: I0317 17:35:39.814095 2658 scope.go:117] "RemoveContainer" containerID="707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741" Mar 17 17:35:39.816240 containerd[1483]: time="2025-03-17T17:35:39.815965135Z" level=info msg="RemoveContainer for \"707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741\"" Mar 17 17:35:39.818926 containerd[1483]: time="2025-03-17T17:35:39.818868607Z" level=info msg="RemoveContainer for \"707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741\" returns successfully" Mar 17 17:35:39.819347 kubelet[2658]: I0317 17:35:39.819098 2658 scope.go:117] "RemoveContainer" containerID="91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80" Mar 17 17:35:39.820295 containerd[1483]: time="2025-03-17T17:35:39.820250906Z" level=info msg="RemoveContainer for \"91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80\"" Mar 17 17:35:39.825443 containerd[1483]: time="2025-03-17T17:35:39.825301123Z" level=info msg="RemoveContainer for \"91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80\" returns successfully" Mar 17 17:35:39.825655 kubelet[2658]: I0317 17:35:39.825556 2658 scope.go:117] "RemoveContainer" containerID="ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7" Mar 17 17:35:39.826553 containerd[1483]: time="2025-03-17T17:35:39.826493190Z" level=info msg="RemoveContainer for \"ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7\"" Mar 17 17:35:39.828766 containerd[1483]: time="2025-03-17T17:35:39.828702333Z" level=info msg="RemoveContainer for \"ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7\" returns successfully" Mar 17 17:35:39.829268 kubelet[2658]: I0317 17:35:39.829134 2658 scope.go:117] "RemoveContainer" containerID="894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946" Mar 17 17:35:39.829642 containerd[1483]: time="2025-03-17T17:35:39.829559815Z" level=error msg="ContainerStatus for 
\"894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946\": not found" Mar 17 17:35:39.829896 kubelet[2658]: E0317 17:35:39.829798 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946\": not found" containerID="894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946" Mar 17 17:35:39.829896 kubelet[2658]: I0317 17:35:39.829826 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946"} err="failed to get container status \"894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946\": rpc error: code = NotFound desc = an error occurred when try to find container \"894812b809638601bd66110ab0da268f8c0dbae7235468a91c7f638ff9e36946\": not found" Mar 17 17:35:39.829896 kubelet[2658]: I0317 17:35:39.829867 2658 scope.go:117] "RemoveContainer" containerID="270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3" Mar 17 17:35:39.830216 containerd[1483]: time="2025-03-17T17:35:39.830037434Z" level=error msg="ContainerStatus for \"270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3\": not found" Mar 17 17:35:39.830380 kubelet[2658]: E0317 17:35:39.830328 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3\": not found" containerID="270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3" Mar 17 17:35:39.830380 kubelet[2658]: I0317 17:35:39.830347 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3"} err="failed to get container status \"270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3\": rpc error: code = NotFound desc = an error occurred when try to find container \"270786c1814bdf2fd4341b20332d87dd79f41c33ec2580ecdb8c4ae63e3cace3\": not found" Mar 17 17:35:39.830380 kubelet[2658]: I0317 17:35:39.830363 2658 scope.go:117] "RemoveContainer" containerID="707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741" Mar 17 17:35:39.830751 containerd[1483]: time="2025-03-17T17:35:39.830488614Z" level=error msg="ContainerStatus for \"707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741\": not found" Mar 17 17:35:39.830964 kubelet[2658]: E0317 17:35:39.830853 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741\": not found" containerID="707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741" Mar 17 17:35:39.830964 kubelet[2658]: I0317 17:35:39.830889 2658 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741"} err="failed to get container status \"707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741\": rpc error: code = NotFound desc = an error occurred when try to find container \"707ebe26e7535deba83e149bfbde4aa314dbe3c141ffab587968cb5341205741\": not found" Mar 17 17:35:39.830964 kubelet[2658]: I0317 17:35:39.830905 2658 scope.go:117] "RemoveContainer" containerID="91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80" Mar 17 17:35:39.831450 containerd[1483]: time="2025-03-17T17:35:39.831195183Z" level=error msg="ContainerStatus for \"91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80\": not found" Mar 17 17:35:39.831503 kubelet[2658]: E0317 17:35:39.831322 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80\": not found" containerID="91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80" Mar 17 17:35:39.831503 kubelet[2658]: I0317 17:35:39.831370 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80"} err="failed to get container status \"91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80\": rpc error: code = NotFound desc = an error occurred when try to find container \"91774f476a0d6e20f443cdde91780403f617628123a426cef27c2b17e03d3c80\": not found" Mar 17 17:35:39.831503 kubelet[2658]: I0317 17:35:39.831385 2658 scope.go:117] "RemoveContainer" containerID="ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7" Mar 17 17:35:39.832785 containerd[1483]: time="2025-03-17T17:35:39.831795436Z" level=error msg="ContainerStatus for \"ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7\": not found" Mar 17 17:35:39.832843 kubelet[2658]: E0317 17:35:39.831919 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7\": not found" containerID="ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7" Mar 17 17:35:39.832843 kubelet[2658]: I0317 17:35:39.831942 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7"} err="failed to get container status \"ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac80104bf6a137c76cdf1bcb3c6ccbab6b083e9c78a95a086c06c9cac0ce23d7\": not found" Mar 17 17:35:39.832843 kubelet[2658]: I0317 17:35:39.831956 2658 scope.go:117] "RemoveContainer" containerID="2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348" Mar 17 17:35:39.832933 containerd[1483]: time="2025-03-17T17:35:39.832812471Z" level=info msg="RemoveContainer for 
\"2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348\"" Mar 17 17:35:39.842985 containerd[1483]: time="2025-03-17T17:35:39.842932584Z" level=info msg="RemoveContainer for \"2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348\" returns successfully" Mar 17 17:35:39.843716 kubelet[2658]: I0317 17:35:39.843668 2658 scope.go:117] "RemoveContainer" containerID="2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348" Mar 17 17:35:39.845213 containerd[1483]: time="2025-03-17T17:35:39.845152406Z" level=error msg="ContainerStatus for \"2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348\": not found" Mar 17 17:35:39.845482 kubelet[2658]: E0317 17:35:39.845380 2658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348\": not found" containerID="2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348" Mar 17 17:35:39.845482 kubelet[2658]: I0317 17:35:39.845410 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348"} err="failed to get container status \"2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f997e21368cdecf0b52e9e08b2a9f1c01ddc2ddbea7638caac3c64eb3495348\": not found" Mar 17 17:35:40.726816 sshd[4792]: Connection closed by 10.0.0.1 port 53052 Mar 17 17:35:40.727585 sshd-session[4789]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:40.737098 systemd[1]: sshd@50-10.0.0.45:22-10.0.0.1:53052.service: Deactivated successfully. Mar 17 17:35:40.738868 systemd[1]: session-51.scope: Deactivated successfully. Mar 17 17:35:40.739049 systemd[1]: session-51.scope: Consumed 2.361s CPU time, 27M memory peak. Mar 17 17:35:40.740894 systemd-logind[1468]: Session 51 logged out. Waiting for processes to exit. Mar 17 17:35:40.745994 systemd[1]: Started sshd@51-10.0.0.45:22-10.0.0.1:53058.service - OpenSSH per-connection server daemon (10.0.0.1:53058). Mar 17 17:35:40.747108 systemd-logind[1468]: Removed session 51. Mar 17 17:35:40.783238 sshd[4951]: Accepted publickey for core from 10.0.0.1 port 53058 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:35:40.784427 sshd-session[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:40.788679 systemd-logind[1468]: New session 52 of user core. Mar 17 17:35:40.795901 systemd[1]: Started session-52.scope - Session 52 of User core. 
Mar 17 17:35:41.334397 kubelet[2658]: I0317 17:35:41.334359 2658 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2d86a1a-9477-4724-8766-30af5d545a54" path="/var/lib/kubelet/pods/b2d86a1a-9477-4724-8766-30af5d545a54/volumes" Mar 17 17:35:41.334794 kubelet[2658]: I0317 17:35:41.334770 2658 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33e36dc-9fca-4745-89ed-1c5048f53be4" path="/var/lib/kubelet/pods/c33e36dc-9fca-4745-89ed-1c5048f53be4/volumes" Mar 17 17:35:41.871991 sshd[4954]: Connection closed by 10.0.0.1 port 53058 Mar 17 17:35:41.871901 sshd-session[4951]: pam_unix(sshd:session): session closed for user core Mar 17 17:35:41.882635 systemd[1]: sshd@51-10.0.0.45:22-10.0.0.1:53058.service: Deactivated successfully. Mar 17 17:35:41.885365 systemd[1]: session-52.scope: Deactivated successfully. Mar 17 17:35:41.888487 kubelet[2658]: I0317 17:35:41.888387 2658 topology_manager.go:215] "Topology Admit Handler" podUID="c121157c-e4c7-4cf2-8c4c-a87c56d40911" podNamespace="kube-system" podName="cilium-dz8r2" Mar 17 17:35:41.888487 kubelet[2658]: E0317 17:35:41.888447 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c33e36dc-9fca-4745-89ed-1c5048f53be4" containerName="mount-cgroup" Mar 17 17:35:41.888487 kubelet[2658]: E0317 17:35:41.888457 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c33e36dc-9fca-4745-89ed-1c5048f53be4" containerName="apply-sysctl-overwrites" Mar 17 17:35:41.888487 kubelet[2658]: E0317 17:35:41.888463 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c33e36dc-9fca-4745-89ed-1c5048f53be4" containerName="clean-cilium-state" Mar 17 17:35:41.888487 kubelet[2658]: E0317 17:35:41.888471 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c33e36dc-9fca-4745-89ed-1c5048f53be4" containerName="cilium-agent" Mar 17 17:35:41.888487 kubelet[2658]: E0317 17:35:41.888490 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b2d86a1a-9477-4724-8766-30af5d545a54" containerName="cilium-operator" Mar 17 17:35:41.888487 kubelet[2658]: E0317 17:35:41.888497 2658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c33e36dc-9fca-4745-89ed-1c5048f53be4" containerName="mount-bpf-fs" Mar 17 17:35:41.890486 kubelet[2658]: I0317 17:35:41.888520 2658 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d86a1a-9477-4724-8766-30af5d545a54" containerName="cilium-operator" Mar 17 17:35:41.890486 kubelet[2658]: I0317 17:35:41.888526 2658 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33e36dc-9fca-4745-89ed-1c5048f53be4" containerName="cilium-agent" Mar 17 17:35:41.892701 systemd-logind[1468]: Session 52 logged out. Waiting for processes to exit. Mar 17 17:35:41.905132 systemd[1]: Started sshd@52-10.0.0.45:22-10.0.0.1:53062.service - OpenSSH per-connection server daemon (10.0.0.1:53062). Mar 17 17:35:41.911668 systemd-logind[1468]: Removed session 52. Mar 17 17:35:41.920037 systemd[1]: Created slice kubepods-burstable-podc121157c_e4c7_4cf2_8c4c_a87c56d40911.slice - libcontainer container kubepods-burstable-podc121157c_e4c7_4cf2_8c4c_a87c56d40911.slice. Mar 17 17:35:41.948127 sshd[4965]: Accepted publickey for core from 10.0.0.1 port 53062 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg Mar 17 17:35:41.949422 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:35:41.953110 systemd-logind[1468]: New session 53 of user core. 
Mar 17 17:35:41.959890 systemd[1]: Started session-53.scope - Session 53 of User core.
Mar 17 17:35:42.012180 sshd[4968]: Connection closed by 10.0.0.1 port 53062
Mar 17 17:35:42.012665 sshd-session[4965]: pam_unix(sshd:session): session closed for user core
Mar 17 17:35:42.026991 systemd[1]: sshd@52-10.0.0.45:22-10.0.0.1:53062.service: Deactivated successfully.
Mar 17 17:35:42.028664 systemd[1]: session-53.scope: Deactivated successfully.
Mar 17 17:35:42.030118 systemd-logind[1468]: Session 53 logged out. Waiting for processes to exit.
Mar 17 17:35:42.031454 systemd[1]: Started sshd@53-10.0.0.45:22-10.0.0.1:53068.service - OpenSSH per-connection server daemon (10.0.0.1:53068).
Mar 17 17:35:42.032209 systemd-logind[1468]: Removed session 53.
Mar 17 17:35:42.073456 sshd[4974]: Accepted publickey for core from 10.0.0.1 port 53068 ssh2: RSA SHA256:5Ue/V+RoCRMkcnXRZmyQndEQOSMEwJs2XNBwCapeMHg
Mar 17 17:35:42.074933 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:35:42.078703 systemd-logind[1468]: New session 54 of user core.
Mar 17 17:35:42.088132 kubelet[2658]: I0317 17:35:42.088085 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c121157c-e4c7-4cf2-8c4c-a87c56d40911-host-proc-sys-net\") pod \"cilium-dz8r2\" (UID: \"c121157c-e4c7-4cf2-8c4c-a87c56d40911\") " pod="kube-system/cilium-dz8r2"
Mar 17 17:35:42.088132 kubelet[2658]: I0317 17:35:42.088132 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c121157c-e4c7-4cf2-8c4c-a87c56d40911-host-proc-sys-kernel\") pod \"cilium-dz8r2\" (UID: \"c121157c-e4c7-4cf2-8c4c-a87c56d40911\") " pod="kube-system/cilium-dz8r2"
Mar 17 17:35:42.088230 kubelet[2658]: I0317 17:35:42.088151 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c121157c-e4c7-4cf2-8c4c-a87c56d40911-xtables-lock\") pod \"cilium-dz8r2\" (UID: \"c121157c-e4c7-4cf2-8c4c-a87c56d40911\") " pod="kube-system/cilium-dz8r2"
Mar 17 17:35:42.088230 kubelet[2658]: I0317 17:35:42.088167 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c121157c-e4c7-4cf2-8c4c-a87c56d40911-clustermesh-secrets\") pod \"cilium-dz8r2\" (UID: \"c121157c-e4c7-4cf2-8c4c-a87c56d40911\") " pod="kube-system/cilium-dz8r2"
Mar 17 17:35:42.088230 kubelet[2658]: I0317 17:35:42.088183 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c121157c-e4c7-4cf2-8c4c-a87c56d40911-cilium-cgroup\") pod \"cilium-dz8r2\" (UID: \"c121157c-e4c7-4cf2-8c4c-a87c56d40911\") " pod="kube-system/cilium-dz8r2"
Mar 17 17:35:42.088230 kubelet[2658]: I0317 17:35:42.088197 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c121157c-e4c7-4cf2-8c4c-a87c56d40911-cilium-run\") pod \"cilium-dz8r2\" (UID: \"c121157c-e4c7-4cf2-8c4c-a87c56d40911\") " pod="kube-system/cilium-dz8r2"
Mar 17 17:35:42.088230 kubelet[2658]: I0317 17:35:42.088214 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql7px\" (UniqueName: \"kubernetes.io/projected/c121157c-e4c7-4cf2-8c4c-a87c56d40911-kube-api-access-ql7px\") pod \"cilium-dz8r2\" (UID: \"c121157c-e4c7-4cf2-8c4c-a87c56d40911\") " pod="kube-system/cilium-dz8r2"
Mar 17 17:35:42.088331 kubelet[2658]: I0317 17:35:42.088249 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c121157c-e4c7-4cf2-8c4c-a87c56d40911-cilium-config-path\") pod \"cilium-dz8r2\" (UID: \"c121157c-e4c7-4cf2-8c4c-a87c56d40911\") " pod="kube-system/cilium-dz8r2"
Mar 17 17:35:42.088331 kubelet[2658]: I0317 17:35:42.088267 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c121157c-e4c7-4cf2-8c4c-a87c56d40911-cilium-ipsec-secrets\") pod \"cilium-dz8r2\" (UID: \"c121157c-e4c7-4cf2-8c4c-a87c56d40911\") " pod="kube-system/cilium-dz8r2"
Mar 17 17:35:42.088331 kubelet[2658]: I0317 17:35:42.088284 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c121157c-e4c7-4cf2-8c4c-a87c56d40911-hubble-tls\") pod \"cilium-dz8r2\" (UID: \"c121157c-e4c7-4cf2-8c4c-a87c56d40911\") " pod="kube-system/cilium-dz8r2"
Mar 17 17:35:42.088331 kubelet[2658]: I0317 17:35:42.088307 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c121157c-e4c7-4cf2-8c4c-a87c56d40911-hostproc\") pod \"cilium-dz8r2\" (UID: \"c121157c-e4c7-4cf2-8c4c-a87c56d40911\") " pod="kube-system/cilium-dz8r2"
Mar 17 17:35:42.088331 kubelet[2658]: I0317 17:35:42.088321 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c121157c-e4c7-4cf2-8c4c-a87c56d40911-lib-modules\") pod \"cilium-dz8r2\" (UID: \"c121157c-e4c7-4cf2-8c4c-a87c56d40911\") " pod="kube-system/cilium-dz8r2"
Mar 17 17:35:42.088425 kubelet[2658]: I0317 17:35:42.088337 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c121157c-e4c7-4cf2-8c4c-a87c56d40911-bpf-maps\") pod \"cilium-dz8r2\" (UID: \"c121157c-e4c7-4cf2-8c4c-a87c56d40911\") " pod="kube-system/cilium-dz8r2"
Mar 17 17:35:42.088425 kubelet[2658]: I0317 17:35:42.088351 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c121157c-e4c7-4cf2-8c4c-a87c56d40911-etc-cni-netd\") pod \"cilium-dz8r2\" (UID: \"c121157c-e4c7-4cf2-8c4c-a87c56d40911\") " pod="kube-system/cilium-dz8r2"
Mar 17 17:35:42.088425 kubelet[2658]: I0317 17:35:42.088365 2658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c121157c-e4c7-4cf2-8c4c-a87c56d40911-cni-path\") pod \"cilium-dz8r2\" (UID: \"c121157c-e4c7-4cf2-8c4c-a87c56d40911\") " pod="kube-system/cilium-dz8r2"
Mar 17 17:35:42.089877 systemd[1]: Started session-54.scope - Session 54 of User core.
Mar 17 17:35:42.231280 kubelet[2658]: E0317 17:35:42.231176 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:35:42.231929 containerd[1483]: time="2025-03-17T17:35:42.231879273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dz8r2,Uid:c121157c-e4c7-4cf2-8c4c-a87c56d40911,Namespace:kube-system,Attempt:0,}"
Mar 17 17:35:42.266469 containerd[1483]: time="2025-03-17T17:35:42.266038905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:35:42.266469 containerd[1483]: time="2025-03-17T17:35:42.266444008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:35:42.266469 containerd[1483]: time="2025-03-17T17:35:42.266457007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:35:42.266636 containerd[1483]: time="2025-03-17T17:35:42.266542884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:35:42.285973 systemd[1]: Started cri-containerd-062376f3515aa643c041bf4e63dd3caedde3a9dc6e855cf41305c76923ec773c.scope - libcontainer container 062376f3515aa643c041bf4e63dd3caedde3a9dc6e855cf41305c76923ec773c.
Mar 17 17:35:42.314835 containerd[1483]: time="2025-03-17T17:35:42.314798559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dz8r2,Uid:c121157c-e4c7-4cf2-8c4c-a87c56d40911,Namespace:kube-system,Attempt:0,} returns sandbox id \"062376f3515aa643c041bf4e63dd3caedde3a9dc6e855cf41305c76923ec773c\""
Mar 17 17:35:42.315756 kubelet[2658]: E0317 17:35:42.315547 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:35:42.318414 containerd[1483]: time="2025-03-17T17:35:42.317412248Z" level=info msg="CreateContainer within sandbox \"062376f3515aa643c041bf4e63dd3caedde3a9dc6e855cf41305c76923ec773c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 17:35:42.327070 containerd[1483]: time="2025-03-17T17:35:42.327031440Z" level=info msg="CreateContainer within sandbox \"062376f3515aa643c041bf4e63dd3caedde3a9dc6e855cf41305c76923ec773c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0c1451593cf544b333a0ca30450d89aac908b4345ec9cd6724760762ec3ec3ef\""
Mar 17 17:35:42.328885 containerd[1483]: time="2025-03-17T17:35:42.328071116Z" level=info msg="StartContainer for \"0c1451593cf544b333a0ca30450d89aac908b4345ec9cd6724760762ec3ec3ef\""
Mar 17 17:35:42.350903 systemd[1]: Started cri-containerd-0c1451593cf544b333a0ca30450d89aac908b4345ec9cd6724760762ec3ec3ef.scope - libcontainer container 0c1451593cf544b333a0ca30450d89aac908b4345ec9cd6724760762ec3ec3ef.
Mar 17 17:35:42.372984 containerd[1483]: time="2025-03-17T17:35:42.372942255Z" level=info msg="StartContainer for \"0c1451593cf544b333a0ca30450d89aac908b4345ec9cd6724760762ec3ec3ef\" returns successfully"
Mar 17 17:35:42.381281 systemd[1]: cri-containerd-0c1451593cf544b333a0ca30450d89aac908b4345ec9cd6724760762ec3ec3ef.scope: Deactivated successfully.
Mar 17 17:35:42.409438 containerd[1483]: time="2025-03-17T17:35:42.409283354Z" level=info msg="shim disconnected" id=0c1451593cf544b333a0ca30450d89aac908b4345ec9cd6724760762ec3ec3ef namespace=k8s.io
Mar 17 17:35:42.409834 containerd[1483]: time="2025-03-17T17:35:42.409645659Z" level=warning msg="cleaning up after shim disconnected" id=0c1451593cf544b333a0ca30450d89aac908b4345ec9cd6724760762ec3ec3ef namespace=k8s.io
Mar 17 17:35:42.409834 containerd[1483]: time="2025-03-17T17:35:42.409665378Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:35:42.799656 kubelet[2658]: E0317 17:35:42.799629 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:35:42.803663 containerd[1483]: time="2025-03-17T17:35:42.803365253Z" level=info msg="CreateContainer within sandbox \"062376f3515aa643c041bf4e63dd3caedde3a9dc6e855cf41305c76923ec773c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:35:42.813618 containerd[1483]: time="2025-03-17T17:35:42.813563821Z" level=info msg="CreateContainer within sandbox \"062376f3515aa643c041bf4e63dd3caedde3a9dc6e855cf41305c76923ec773c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6f1fdae677f640cf66e2998107373ebe5e38ad02d940f14a65c9927b26d03b8a\""
Mar 17 17:35:42.814579 containerd[1483]: time="2025-03-17T17:35:42.814150676Z" level=info msg="StartContainer for \"6f1fdae677f640cf66e2998107373ebe5e38ad02d940f14a65c9927b26d03b8a\""
Mar 17 17:35:42.835881 systemd[1]: Started cri-containerd-6f1fdae677f640cf66e2998107373ebe5e38ad02d940f14a65c9927b26d03b8a.scope - libcontainer container 6f1fdae677f640cf66e2998107373ebe5e38ad02d940f14a65c9927b26d03b8a.
Mar 17 17:35:42.855287 containerd[1483]: time="2025-03-17T17:35:42.855238175Z" level=info msg="StartContainer for \"6f1fdae677f640cf66e2998107373ebe5e38ad02d940f14a65c9927b26d03b8a\" returns successfully"
Mar 17 17:35:42.862945 systemd[1]: cri-containerd-6f1fdae677f640cf66e2998107373ebe5e38ad02d940f14a65c9927b26d03b8a.scope: Deactivated successfully.
Mar 17 17:35:42.881758 containerd[1483]: time="2025-03-17T17:35:42.881685934Z" level=info msg="shim disconnected" id=6f1fdae677f640cf66e2998107373ebe5e38ad02d940f14a65c9927b26d03b8a namespace=k8s.io
Mar 17 17:35:42.881758 containerd[1483]: time="2025-03-17T17:35:42.881750571Z" level=warning msg="cleaning up after shim disconnected" id=6f1fdae677f640cf66e2998107373ebe5e38ad02d940f14a65c9927b26d03b8a namespace=k8s.io
Mar 17 17:35:42.881758 containerd[1483]: time="2025-03-17T17:35:42.881759171Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:35:43.281889 kubelet[2658]: I0317 17:35:43.281843 2658 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:35:43Z","lastTransitionTime":"2025-03-17T17:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 17:35:43.802545 kubelet[2658]: E0317 17:35:43.802501 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:35:43.806829 containerd[1483]: time="2025-03-17T17:35:43.806786718Z" level=info msg="CreateContainer within sandbox \"062376f3515aa643c041bf4e63dd3caedde3a9dc6e855cf41305c76923ec773c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 17:35:43.825223 containerd[1483]: time="2025-03-17T17:35:43.825173349Z" level=info msg="CreateContainer within sandbox \"062376f3515aa643c041bf4e63dd3caedde3a9dc6e855cf41305c76923ec773c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1e2c1811f70a655970e2e4c9257488d1549f476f0426c536b60f975e0fad6575\""
Mar 17 17:35:43.826025 containerd[1483]: time="2025-03-17T17:35:43.825996955Z" level=info msg="StartContainer for \"1e2c1811f70a655970e2e4c9257488d1549f476f0426c536b60f975e0fad6575\""
Mar 17 17:35:43.852927 systemd[1]: Started cri-containerd-1e2c1811f70a655970e2e4c9257488d1549f476f0426c536b60f975e0fad6575.scope - libcontainer container 1e2c1811f70a655970e2e4c9257488d1549f476f0426c536b60f975e0fad6575.
Mar 17 17:35:43.879091 containerd[1483]: time="2025-03-17T17:35:43.879044977Z" level=info msg="StartContainer for \"1e2c1811f70a655970e2e4c9257488d1549f476f0426c536b60f975e0fad6575\" returns successfully"
Mar 17 17:35:43.880116 systemd[1]: cri-containerd-1e2c1811f70a655970e2e4c9257488d1549f476f0426c536b60f975e0fad6575.scope: Deactivated successfully.
Mar 17 17:35:43.907868 containerd[1483]: time="2025-03-17T17:35:43.907795936Z" level=info msg="shim disconnected" id=1e2c1811f70a655970e2e4c9257488d1549f476f0426c536b60f975e0fad6575 namespace=k8s.io
Mar 17 17:35:43.907868 containerd[1483]: time="2025-03-17T17:35:43.907853293Z" level=warning msg="cleaning up after shim disconnected" id=1e2c1811f70a655970e2e4c9257488d1549f476f0426c536b60f975e0fad6575 namespace=k8s.io
Mar 17 17:35:43.907868 containerd[1483]: time="2025-03-17T17:35:43.907862453Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:35:44.195383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e2c1811f70a655970e2e4c9257488d1549f476f0426c536b60f975e0fad6575-rootfs.mount: Deactivated successfully.
Mar 17 17:35:44.414631 kubelet[2658]: E0317 17:35:44.414572 2658 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 17:35:44.806350 kubelet[2658]: E0317 17:35:44.806310 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:35:44.809192 containerd[1483]: time="2025-03-17T17:35:44.809057528Z" level=info msg="CreateContainer within sandbox \"062376f3515aa643c041bf4e63dd3caedde3a9dc6e855cf41305c76923ec773c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 17:35:44.824885 containerd[1483]: time="2025-03-17T17:35:44.824839518Z" level=info msg="CreateContainer within sandbox \"062376f3515aa643c041bf4e63dd3caedde3a9dc6e855cf41305c76923ec773c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8d4a22aa0d83cee9a6e96b0409c854031196e4cfdabcf20b24cce0fb0488de7f\""
Mar 17 17:35:44.826423 containerd[1483]: time="2025-03-17T17:35:44.826270979Z" level=info msg="StartContainer for \"8d4a22aa0d83cee9a6e96b0409c854031196e4cfdabcf20b24cce0fb0488de7f\""
Mar 17 17:35:44.848951 systemd[1]: Started cri-containerd-8d4a22aa0d83cee9a6e96b0409c854031196e4cfdabcf20b24cce0fb0488de7f.scope - libcontainer container 8d4a22aa0d83cee9a6e96b0409c854031196e4cfdabcf20b24cce0fb0488de7f.
Mar 17 17:35:44.865945 systemd[1]: cri-containerd-8d4a22aa0d83cee9a6e96b0409c854031196e4cfdabcf20b24cce0fb0488de7f.scope: Deactivated successfully.
Mar 17 17:35:44.867801 containerd[1483]: time="2025-03-17T17:35:44.867497639Z" level=info msg="StartContainer for \"8d4a22aa0d83cee9a6e96b0409c854031196e4cfdabcf20b24cce0fb0488de7f\" returns successfully"
Mar 17 17:35:44.885525 containerd[1483]: time="2025-03-17T17:35:44.885357743Z" level=info msg="shim disconnected" id=8d4a22aa0d83cee9a6e96b0409c854031196e4cfdabcf20b24cce0fb0488de7f namespace=k8s.io
Mar 17 17:35:44.885525 containerd[1483]: time="2025-03-17T17:35:44.885406421Z" level=warning msg="cleaning up after shim disconnected" id=8d4a22aa0d83cee9a6e96b0409c854031196e4cfdabcf20b24cce0fb0488de7f namespace=k8s.io
Mar 17 17:35:44.885525 containerd[1483]: time="2025-03-17T17:35:44.885415140Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:35:45.195350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d4a22aa0d83cee9a6e96b0409c854031196e4cfdabcf20b24cce0fb0488de7f-rootfs.mount: Deactivated successfully.
Mar 17 17:35:45.810572 kubelet[2658]: E0317 17:35:45.810543 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:35:45.814432 containerd[1483]: time="2025-03-17T17:35:45.814380383Z" level=info msg="CreateContainer within sandbox \"062376f3515aa643c041bf4e63dd3caedde3a9dc6e855cf41305c76923ec773c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 17:35:45.828444 containerd[1483]: time="2025-03-17T17:35:45.828379214Z" level=info msg="CreateContainer within sandbox \"062376f3515aa643c041bf4e63dd3caedde3a9dc6e855cf41305c76923ec773c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8c7f992537c02228ef38bcb416d4a7fc5fae05b9fb429b0cf61c7dfb093903e3\""
Mar 17 17:35:45.828801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257360989.mount: Deactivated successfully.
Mar 17 17:35:45.829061 containerd[1483]: time="2025-03-17T17:35:45.828800237Z" level=info msg="StartContainer for \"8c7f992537c02228ef38bcb416d4a7fc5fae05b9fb429b0cf61c7dfb093903e3\""
Mar 17 17:35:45.854890 systemd[1]: Started cri-containerd-8c7f992537c02228ef38bcb416d4a7fc5fae05b9fb429b0cf61c7dfb093903e3.scope - libcontainer container 8c7f992537c02228ef38bcb416d4a7fc5fae05b9fb429b0cf61c7dfb093903e3.
Mar 17 17:35:45.878866 containerd[1483]: time="2025-03-17T17:35:45.878826523Z" level=info msg="StartContainer for \"8c7f992537c02228ef38bcb416d4a7fc5fae05b9fb429b0cf61c7dfb093903e3\" returns successfully"
Mar 17 17:35:46.159784 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 17 17:35:46.815399 kubelet[2658]: E0317 17:35:46.815348 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:35:46.828590 kubelet[2658]: I0317 17:35:46.828464 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dz8r2" podStartSLOduration=5.8284509700000005 podStartE2EDuration="5.82845097s" podCreationTimestamp="2025-03-17 17:35:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:35:46.828365134 +0000 UTC m=+227.595962816" watchObservedRunningTime="2025-03-17 17:35:46.82845097 +0000 UTC m=+227.596048612"
Mar 17 17:35:48.233127 kubelet[2658]: E0317 17:35:48.233087 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:35:48.943171 systemd-networkd[1411]: lxc_health: Link UP
Mar 17 17:35:48.943392 systemd-networkd[1411]: lxc_health: Gained carrier
Mar 17 17:35:50.237494 kubelet[2658]: E0317 17:35:50.237454 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:35:50.822635 kubelet[2658]: E0317 17:35:50.822607 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:35:50.929945 systemd-networkd[1411]: lxc_health: Gained IPv6LL
Mar 17 17:35:52.333498 kubelet[2658]: E0317 17:35:52.333149 2658 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:35:52.666052 systemd[1]: run-containerd-runc-k8s.io-8c7f992537c02228ef38bcb416d4a7fc5fae05b9fb429b0cf61c7dfb093903e3-runc.I2nOIB.mount: Deactivated successfully.
Mar 17 17:35:54.843905 sshd[4977]: Connection closed by 10.0.0.1 port 53068
Mar 17 17:35:54.843822 sshd-session[4974]: pam_unix(sshd:session): session closed for user core
Mar 17 17:35:54.847226 systemd[1]: sshd@53-10.0.0.45:22-10.0.0.1:53068.service: Deactivated successfully.
Mar 17 17:35:54.850297 systemd[1]: session-54.scope: Deactivated successfully.
Mar 17 17:35:54.850961 systemd-logind[1468]: Session 54 logged out. Waiting for processes to exit.
Mar 17 17:35:54.851975 systemd-logind[1468]: Removed session 54.