Jan 29 11:16:20.920077 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 11:16:20.920098 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:37:00 -00 2025
Jan 29 11:16:20.920108 kernel: KASLR enabled
Jan 29 11:16:20.920114 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:16:20.920119 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Jan 29 11:16:20.920125 kernel: random: crng init done
Jan 29 11:16:20.920132 kernel: secureboot: Secure boot disabled
Jan 29 11:16:20.920138 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:16:20.920144 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 29 11:16:20.920151 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:16:20.920157 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:16:20.920163 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:16:20.920168 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:16:20.920174 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:16:20.920181 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:16:20.920189 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:16:20.920195 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:16:20.920201 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:16:20.920207 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:16:20.920214 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 11:16:20.920220 kernel: NUMA: Failed to initialise from firmware
Jan 29 11:16:20.920227 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:16:20.920233 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jan 29 11:16:20.920243 kernel: Zone ranges:
Jan 29 11:16:20.920249 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:16:20.920257 kernel: DMA32 empty
Jan 29 11:16:20.920263 kernel: Normal empty
Jan 29 11:16:20.920269 kernel: Movable zone start for each node
Jan 29 11:16:20.920275 kernel: Early memory node ranges
Jan 29 11:16:20.920281 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 29 11:16:20.920287 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 11:16:20.920293 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 11:16:20.920299 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 11:16:20.920305 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 11:16:20.920312 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 11:16:20.920318 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 11:16:20.920324 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:16:20.920331 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 11:16:20.920337 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:16:20.920343 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 11:16:20.920352 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:16:20.920358 kernel: psci: Trusted OS migration not required
Jan 29 11:16:20.920365 kernel: psci: SMC Calling Convention v1.1
Jan 29 11:16:20.920373 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 11:16:20.920379 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:16:20.920386 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:16:20.920393 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 29 11:16:20.920399 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:16:20.920406 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:16:20.920412 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 11:16:20.920419 kernel: CPU features: detected: Spectre-v4
Jan 29 11:16:20.920425 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:16:20.920432 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 11:16:20.920440 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 11:16:20.920446 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 11:16:20.920453 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 11:16:20.920460 kernel: alternatives: applying boot alternatives
Jan 29 11:16:20.920468 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:16:20.920475 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:16:20.920494 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:16:20.920501 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:16:20.920508 kernel: Fallback order for Node 0: 0
Jan 29 11:16:20.920515 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 29 11:16:20.920532 kernel: Policy zone: DMA
Jan 29 11:16:20.920554 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:16:20.920561 kernel: software IO TLB: area num 4.
Jan 29 11:16:20.920568 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 11:16:20.920575 kernel: Memory: 2386320K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 185968K reserved, 0K cma-reserved)
Jan 29 11:16:20.920581 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:16:20.920588 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:16:20.920595 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:16:20.920602 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:16:20.920609 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:16:20.920616 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:16:20.920623 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:16:20.920630 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:16:20.920639 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:16:20.920645 kernel: GICv3: 256 SPIs implemented
Jan 29 11:16:20.920652 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:16:20.920658 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:16:20.920664 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 11:16:20.920671 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 11:16:20.920678 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 11:16:20.920685 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 11:16:20.920692 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 11:16:20.920698 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 11:16:20.920705 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 11:16:20.920713 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:16:20.920720 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:16:20.920726 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 11:16:20.920733 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 11:16:20.920740 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 11:16:20.920746 kernel: arm-pv: using stolen time PV
Jan 29 11:16:20.920753 kernel: Console: colour dummy device 80x25
Jan 29 11:16:20.920760 kernel: ACPI: Core revision 20230628
Jan 29 11:16:20.920767 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 11:16:20.920774 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:16:20.920782 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:16:20.920788 kernel: landlock: Up and running.
Jan 29 11:16:20.920795 kernel: SELinux: Initializing.
Jan 29 11:16:20.920802 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:16:20.920809 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:16:20.920816 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:16:20.920822 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:16:20.920829 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:16:20.920836 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:16:20.920844 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 11:16:20.920851 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 11:16:20.920857 kernel: Remapping and enabling EFI services.
Jan 29 11:16:20.920864 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:16:20.920871 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:16:20.920877 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 11:16:20.920891 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 11:16:20.920898 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:16:20.920905 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 11:16:20.920911 kernel: Detected PIPT I-cache on CPU2
Jan 29 11:16:20.920920 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 11:16:20.920927 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 11:16:20.920938 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:16:20.920946 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 11:16:20.920953 kernel: Detected PIPT I-cache on CPU3
Jan 29 11:16:20.920960 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 11:16:20.920967 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 11:16:20.920974 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:16:20.920981 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 11:16:20.920990 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:16:20.920997 kernel: SMP: Total of 4 processors activated.
Jan 29 11:16:20.921004 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:16:20.921011 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 11:16:20.921018 kernel: CPU features: detected: Common not Private translations
Jan 29 11:16:20.921025 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:16:20.921032 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 11:16:20.921039 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 11:16:20.921047 kernel: CPU features: detected: LSE atomic instructions
Jan 29 11:16:20.921054 kernel: CPU features: detected: Privileged Access Never
Jan 29 11:16:20.921061 kernel: CPU features: detected: RAS Extension Support
Jan 29 11:16:20.921069 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 11:16:20.921076 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:16:20.921083 kernel: alternatives: applying system-wide alternatives
Jan 29 11:16:20.921090 kernel: devtmpfs: initialized
Jan 29 11:16:20.921097 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:16:20.921104 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:16:20.921112 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:16:20.921119 kernel: SMBIOS 3.0.0 present.
Jan 29 11:16:20.921126 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 29 11:16:20.921133 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:16:20.921140 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:16:20.921147 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:16:20.921155 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:16:20.921162 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:16:20.921169 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Jan 29 11:16:20.921177 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:16:20.921184 kernel: cpuidle: using governor menu
Jan 29 11:16:20.921191 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:16:20.921198 kernel: ASID allocator initialised with 32768 entries
Jan 29 11:16:20.921205 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:16:20.921212 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:16:20.921224 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 11:16:20.921231 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 11:16:20.921238 kernel: Modules: 508960 pages in range for PLT usage
Jan 29 11:16:20.921247 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:16:20.921254 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:16:20.921261 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:16:20.921268 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:16:20.921276 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:16:20.921283 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:16:20.921290 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:16:20.921297 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:16:20.921304 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:16:20.921312 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:16:20.921319 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:16:20.921326 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:16:20.921333 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:16:20.921340 kernel: ACPI: Interpreter enabled
Jan 29 11:16:20.921347 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:16:20.921354 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 11:16:20.921361 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 11:16:20.921368 kernel: printk: console [ttyAMA0] enabled
Jan 29 11:16:20.921376 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:16:20.921499 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:16:20.921653 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:16:20.921723 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:16:20.921784 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 11:16:20.921849 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 11:16:20.921858 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 11:16:20.921869 kernel: PCI host bridge to bus 0000:00
Jan 29 11:16:20.921952 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 11:16:20.922015 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 11:16:20.922076 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 11:16:20.922133 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:16:20.922210 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 11:16:20.922289 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:16:20.922358 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 29 11:16:20.922422 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 11:16:20.922487 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:16:20.922589 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:16:20.922663 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 11:16:20.922728 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 29 11:16:20.922786 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 11:16:20.922847 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 11:16:20.922912 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 11:16:20.922922 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 11:16:20.922929 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 11:16:20.922936 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 11:16:20.922944 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 11:16:20.922951 kernel: iommu: Default domain type: Translated
Jan 29 11:16:20.922958 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:16:20.922967 kernel: efivars: Registered efivars operations
Jan 29 11:16:20.922974 kernel: vgaarb: loaded
Jan 29 11:16:20.922981 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:16:20.922989 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:16:20.922996 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:16:20.923003 kernel: pnp: PnP ACPI init
Jan 29 11:16:20.923073 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 11:16:20.923083 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 11:16:20.923093 kernel: NET: Registered PF_INET protocol family
Jan 29 11:16:20.923100 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:16:20.923107 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:16:20.923114 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:16:20.923122 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:16:20.923129 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:16:20.923137 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:16:20.923144 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:16:20.923151 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:16:20.923160 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:16:20.923167 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:16:20.923174 kernel: kvm [1]: HYP mode not available
Jan 29 11:16:20.923181 kernel: Initialise system trusted keyrings
Jan 29 11:16:20.923188 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:16:20.923195 kernel: Key type asymmetric registered
Jan 29 11:16:20.923202 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:16:20.923209 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:16:20.923216 kernel: io scheduler mq-deadline registered
Jan 29 11:16:20.923225 kernel: io scheduler kyber registered
Jan 29 11:16:20.923232 kernel: io scheduler bfq registered
Jan 29 11:16:20.923239 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 11:16:20.923246 kernel: ACPI: button: Power Button [PWRB]
Jan 29 11:16:20.923253 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 11:16:20.923316 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 11:16:20.923326 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:16:20.923333 kernel: thunder_xcv, ver 1.0
Jan 29 11:16:20.923340 kernel: thunder_bgx, ver 1.0
Jan 29 11:16:20.923349 kernel: nicpf, ver 1.0
Jan 29 11:16:20.923356 kernel: nicvf, ver 1.0
Jan 29 11:16:20.923426 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 11:16:20.923486 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:16:20 UTC (1738149380)
Jan 29 11:16:20.923495 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 11:16:20.923503 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 11:16:20.923514 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 11:16:20.923538 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 11:16:20.923547 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:16:20.923555 kernel: Segment Routing with IPv6
Jan 29 11:16:20.923565 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:16:20.923573 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:16:20.923580 kernel: Key type dns_resolver registered
Jan 29 11:16:20.923587 kernel: registered taskstats version 1
Jan 29 11:16:20.923594 kernel: Loading compiled-in X.509 certificates
Jan 29 11:16:20.923601 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f3333311a24aa8c58222f4e98a07eaa1f186ad1a'
Jan 29 11:16:20.923608 kernel: Key type .fscrypt registered
Jan 29 11:16:20.923616 kernel: Key type fscrypt-provisioning registered
Jan 29 11:16:20.923623 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:16:20.923631 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:16:20.923638 kernel: ima: No architecture policies found
Jan 29 11:16:20.923645 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 11:16:20.923652 kernel: clk: Disabling unused clocks
Jan 29 11:16:20.923659 kernel: Freeing unused kernel memory: 39680K
Jan 29 11:16:20.923666 kernel: Run /init as init process
Jan 29 11:16:20.923673 kernel: with arguments:
Jan 29 11:16:20.923683 kernel: /init
Jan 29 11:16:20.923690 kernel: with environment:
Jan 29 11:16:20.923697 kernel: HOME=/
Jan 29 11:16:20.923704 kernel: TERM=linux
Jan 29 11:16:20.923711 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:16:20.923720 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:16:20.923729 systemd[1]: Detected virtualization kvm.
Jan 29 11:16:20.923737 systemd[1]: Detected architecture arm64.
Jan 29 11:16:20.923746 systemd[1]: Running in initrd.
Jan 29 11:16:20.923754 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:16:20.923761 systemd[1]: Hostname set to .
Jan 29 11:16:20.923769 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:16:20.923777 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:16:20.923784 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:16:20.923792 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:16:20.923800 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:16:20.923809 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:16:20.923817 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:16:20.923825 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:16:20.923834 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:16:20.923842 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:16:20.923850 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:16:20.923859 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:16:20.923867 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:16:20.923874 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:16:20.923888 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:16:20.923896 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:16:20.923904 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:16:20.923911 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:16:20.923919 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:16:20.923927 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:16:20.923936 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:16:20.923944 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:16:20.923952 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:16:20.923960 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:16:20.923967 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:16:20.923975 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:16:20.923983 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:16:20.923991 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:16:20.923998 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:16:20.924007 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:16:20.924015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:16:20.924023 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:16:20.924030 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:16:20.924038 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:16:20.924047 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:16:20.924056 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:16:20.924081 systemd-journald[239]: Collecting audit messages is disabled.
Jan 29 11:16:20.924101 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:16:20.924110 systemd-journald[239]: Journal started
Jan 29 11:16:20.924128 systemd-journald[239]: Runtime Journal (/run/log/journal/72abc533bb154829bed8148b8ed04296) is 5.9M, max 47.3M, 41.4M free.
Jan 29 11:16:20.915343 systemd-modules-load[240]: Inserted module 'overlay'
Jan 29 11:16:20.927973 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:16:20.928330 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:16:20.932732 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:16:20.932818 kernel: Bridge firewalling registered
Jan 29 11:16:20.932679 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 29 11:16:20.933685 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:16:20.935354 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:16:20.939381 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:16:20.941698 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:16:20.957745 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:16:20.958872 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:16:20.960958 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:16:20.964100 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:16:20.966481 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:16:20.968871 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:16:20.977794 dracut-cmdline[277]: dracut-dracut-053
Jan 29 11:16:20.980163 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:16:20.995245 systemd-resolved[279]: Positive Trust Anchors:
Jan 29 11:16:20.995320 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:16:20.995351 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:16:20.999957 systemd-resolved[279]: Defaulting to hostname 'linux'.
Jan 29 11:16:21.001299 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:16:21.004247 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:16:21.047562 kernel: SCSI subsystem initialized
Jan 29 11:16:21.051540 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:16:21.059542 kernel: iscsi: registered transport (tcp)
Jan 29 11:16:21.073765 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:16:21.073780 kernel: QLogic iSCSI HBA Driver
Jan 29 11:16:21.115312 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:16:21.136244 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:16:21.152727 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:16:21.152772 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:16:21.152794 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:16:21.199550 kernel: raid6: neonx8 gen() 15783 MB/s
Jan 29 11:16:21.216541 kernel: raid6: neonx4 gen() 15647 MB/s
Jan 29 11:16:21.233537 kernel: raid6: neonx2 gen() 13180 MB/s
Jan 29 11:16:21.251539 kernel: raid6: neonx1 gen() 11116 MB/s
Jan 29 11:16:21.268538 kernel: raid6: int64x8 gen() 6960 MB/s
Jan 29 11:16:21.285543 kernel: raid6: int64x4 gen() 7341 MB/s
Jan 29 11:16:21.302540 kernel: raid6: int64x2 gen() 6120 MB/s
Jan 29 11:16:21.319650 kernel: raid6: int64x1 gen() 5052 MB/s
Jan 29 11:16:21.319679 kernel: raid6: using algorithm neonx8 gen() 15783 MB/s
Jan 29 11:16:21.337614 kernel: raid6: .... xor() 11914 MB/s, rmw enabled
Jan 29 11:16:21.337632 kernel: raid6: using neon recovery algorithm
Jan 29 11:16:21.343032 kernel: xor: measuring software checksum speed
Jan 29 11:16:21.343048 kernel: 8regs : 19735 MB/sec
Jan 29 11:16:21.343704 kernel: 32regs : 19655 MB/sec
Jan 29 11:16:21.344933 kernel: arm64_neon : 26981 MB/sec
Jan 29 11:16:21.344945 kernel: xor: using function: arm64_neon (26981 MB/sec)
Jan 29 11:16:21.398548 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:16:21.410436 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:16:21.420652 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:16:21.432837 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jan 29 11:16:21.436017 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:16:21.438866 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:16:21.454490 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Jan 29 11:16:21.479462 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:16:21.490702 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:16:21.528373 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:16:21.535827 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:16:21.546690 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:16:21.548887 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:16:21.550126 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:16:21.551233 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:16:21.559659 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:16:21.569854 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:16:21.580545 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 11:16:21.586380 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:16:21.586479 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:16:21.586490 kernel: GPT:9289727 != 19775487
Jan 29 11:16:21.586500 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:16:21.586515 kernel: GPT:9289727 != 19775487
Jan 29 11:16:21.586536 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:16:21.586548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:16:21.584801 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:16:21.584918 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:16:21.588213 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:16:21.589648 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:16:21.589768 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:16:21.591967 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:16:21.601901 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:16:21.606618 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (508)
Jan 29 11:16:21.612549 kernel: BTRFS: device fsid b5bc7ecc-f31a-46c7-9582-5efca7819025 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (516)
Jan 29 11:16:21.616266 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:16:21.619002 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:16:21.623774 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:16:21.632972 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:16:21.634168 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:16:21.640396 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:16:21.654654 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:16:21.656370 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:16:21.662032 disk-uuid[551]: Primary Header is updated.
Jan 29 11:16:21.662032 disk-uuid[551]: Secondary Entries is updated.
Jan 29 11:16:21.662032 disk-uuid[551]: Secondary Header is updated.
Jan 29 11:16:21.667133 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:16:21.673021 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:16:22.676545 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:16:22.677147 disk-uuid[554]: The operation has completed successfully.
Jan 29 11:16:22.693854 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:16:22.693960 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:16:22.718722 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:16:22.721274 sh[572]: Success
Jan 29 11:16:22.736551 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 11:16:22.761677 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:16:22.771950 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:16:22.774222 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:16:22.783307 kernel: BTRFS info (device dm-0): first mount of filesystem b5bc7ecc-f31a-46c7-9582-5efca7819025
Jan 29 11:16:22.783340 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:16:22.783351 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:16:22.785171 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:16:22.785188 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:16:22.789663 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:16:22.790690 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:16:22.799668 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:16:22.801146 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:16:22.810193 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:16:22.810237 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:16:22.810248 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:16:22.812555 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:16:22.819115 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:16:22.821555 kernel: BTRFS info (device vda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:16:22.826859 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:16:22.833666 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:16:22.890146 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:16:22.906726 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:16:22.910832 ignition[668]: Ignition 2.20.0
Jan 29 11:16:22.910842 ignition[668]: Stage: fetch-offline
Jan 29 11:16:22.910873 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:16:22.910892 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:16:22.911047 ignition[668]: parsed url from cmdline: ""
Jan 29 11:16:22.911050 ignition[668]: no config URL provided
Jan 29 11:16:22.911055 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:16:22.911063 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:16:22.911087 ignition[668]: op(1): [started] loading QEMU firmware config module
Jan 29 11:16:22.911091 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:16:22.917900 ignition[668]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:16:22.917918 ignition[668]: QEMU firmware config was not found. Ignoring...
Jan 29 11:16:22.928757 systemd-networkd[763]: lo: Link UP
Jan 29 11:16:22.928770 systemd-networkd[763]: lo: Gained carrier
Jan 29 11:16:22.929491 systemd-networkd[763]: Enumeration completed
Jan 29 11:16:22.929591 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:16:22.930170 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:16:22.930173 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:16:22.930862 systemd-networkd[763]: eth0: Link UP
Jan 29 11:16:22.930865 systemd-networkd[763]: eth0: Gained carrier
Jan 29 11:16:22.930871 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:16:22.931034 systemd[1]: Reached target network.target - Network.
Jan 29 11:16:22.965556 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:16:22.965610 ignition[668]: parsing config with SHA512: 81434412e25452f41bedeccd9386ca28aa0eb1686faaa78363e8c3bc182518129405efa04893d86400db34da22aac350bb0a0ca55d67b28a7714fdf93574d3ca
Jan 29 11:16:22.971695 unknown[668]: fetched base config from "system"
Jan 29 11:16:22.971707 unknown[668]: fetched user config from "qemu"
Jan 29 11:16:22.972146 ignition[668]: fetch-offline: fetch-offline passed
Jan 29 11:16:22.972220 ignition[668]: Ignition finished successfully
Jan 29 11:16:22.973960 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:16:22.977065 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:16:22.981674 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:16:22.991866 ignition[770]: Ignition 2.20.0
Jan 29 11:16:22.991877 ignition[770]: Stage: kargs
Jan 29 11:16:22.992046 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:16:22.992056 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:16:22.995905 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:16:22.992945 ignition[770]: kargs: kargs passed
Jan 29 11:16:22.992988 ignition[770]: Ignition finished successfully
Jan 29 11:16:23.008673 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:16:23.017947 ignition[778]: Ignition 2.20.0
Jan 29 11:16:23.017957 ignition[778]: Stage: disks
Jan 29 11:16:23.018113 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:16:23.018122 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:16:23.020077 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:16:23.018979 ignition[778]: disks: disks passed
Jan 29 11:16:23.021222 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:16:23.019020 ignition[778]: Ignition finished successfully
Jan 29 11:16:23.023002 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:16:23.024926 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:16:23.026350 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:16:23.028210 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:16:23.030345 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:16:23.043106 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:16:23.047324 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:16:23.049394 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:16:23.095549 kernel: EXT4-fs (vda9): mounted filesystem bd47c032-97f4-4b3a-b174-3601de374086 r/w with ordered data mode. Quota mode: none.
Jan 29 11:16:23.095940 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:16:23.097129 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:16:23.109594 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:16:23.111163 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:16:23.112498 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:16:23.112543 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:16:23.120295 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797)
Jan 29 11:16:23.120316 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:16:23.120327 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:16:23.112565 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:16:23.124578 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:16:23.124598 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:16:23.119757 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:16:23.123716 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:16:23.128044 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:16:23.165442 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:16:23.169396 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:16:23.173436 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:16:23.176996 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:16:23.249611 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:16:23.260628 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:16:23.262981 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:16:23.267556 kernel: BTRFS info (device vda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:16:23.281621 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:16:23.286029 ignition[912]: INFO : Ignition 2.20.0
Jan 29 11:16:23.286029 ignition[912]: INFO : Stage: mount
Jan 29 11:16:23.287562 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:16:23.287562 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:16:23.287562 ignition[912]: INFO : mount: mount passed
Jan 29 11:16:23.287562 ignition[912]: INFO : Ignition finished successfully
Jan 29 11:16:23.288751 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:16:23.304642 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:16:23.782288 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:16:23.801680 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:16:23.807534 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927)
Jan 29 11:16:23.809596 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:16:23.809618 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:16:23.809635 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:16:23.812530 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:16:23.813751 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:16:23.829499 ignition[944]: INFO : Ignition 2.20.0
Jan 29 11:16:23.829499 ignition[944]: INFO : Stage: files
Jan 29 11:16:23.831061 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:16:23.831061 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:16:23.831061 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:16:23.834619 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:16:23.834619 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:16:23.834619 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:16:23.834619 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:16:23.834619 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:16:23.833670 unknown[944]: wrote ssh authorized keys file for user: core
Jan 29 11:16:23.841808 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 11:16:23.841808 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 11:16:24.170983 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 11:16:24.368240 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 11:16:24.368240 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 11:16:24.371842 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 29 11:16:24.670334 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 11:16:24.750830 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 11:16:24.752567 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:16:24.752567 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:16:24.752567 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:16:24.757494 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:16:24.757494 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:16:24.757494 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:16:24.757494 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:16:24.757494 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:16:24.757494 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:16:24.757494 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:16:24.757494 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:16:24.757494 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:16:24.757494 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:16:24.757494 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 29 11:16:24.943721 systemd-networkd[763]: eth0: Gained IPv6LL
Jan 29 11:16:24.994054 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 11:16:25.160069 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:16:25.160069 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 29 11:16:25.163505 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:16:25.163505 ignition[944]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:16:25.163505 ignition[944]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 29 11:16:25.163505 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 29 11:16:25.163505 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:16:25.163505 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:16:25.163505 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 29 11:16:25.163505 ignition[944]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:16:25.182697 ignition[944]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:16:25.186061 ignition[944]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:16:25.187653 ignition[944]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:16:25.187653 ignition[944]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:16:25.187653 ignition[944]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:16:25.187653 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:16:25.187653 ignition[944]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:16:25.187653 ignition[944]: INFO : files: files passed
Jan 29 11:16:25.187653 ignition[944]: INFO : Ignition finished successfully
Jan 29 11:16:25.188115 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:16:25.201657 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:16:25.204836 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:16:25.207383 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:16:25.208550 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:16:25.212474 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:16:25.215494 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:16:25.215494 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:16:25.218877 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:16:25.219217 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:16:25.221590 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:16:25.229651 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:16:25.247651 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:16:25.248619 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:16:25.249963 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:16:25.251803 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:16:25.253515 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:16:25.254194 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:16:25.268674 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:16:25.278714 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:16:25.285935 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:16:25.287094 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:16:25.289082 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:16:25.290779 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:16:25.290896 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:16:25.293281 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:16:25.294336 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:16:25.296142 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:16:25.297899 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:16:25.299618 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:16:25.301558 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:16:25.303439 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:16:25.305455 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:16:25.307195 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:16:25.309089 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:16:25.310578 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:16:25.310688 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:16:25.312996 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:16:25.314797 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:16:25.316669 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:16:25.320632 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:16:25.321925 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:16:25.322027 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:16:25.324775 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:16:25.324939 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:16:25.326872 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:16:25.328400 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:16:25.328539 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:16:25.330432 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:16:25.332040 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:16:25.333698 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:16:25.333829 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:16:25.335830 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:16:25.335970 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:16:25.337439 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:16:25.337606 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:16:25.339255 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:16:25.339399 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:16:25.352847 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:16:25.354397 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:16:25.355320 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:16:25.355615 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:16:25.357352 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:16:25.357492 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:16:25.366073 ignition[999]: INFO : Ignition 2.20.0
Jan 29 11:16:25.366073 ignition[999]: INFO : Stage: umount
Jan 29 11:16:25.366073 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:16:25.366073 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:16:25.366073 ignition[999]: INFO : umount: umount passed
Jan 29 11:16:25.366073 ignition[999]: INFO : Ignition finished successfully
Jan 29 11:16:25.364753 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:16:25.366544 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:16:25.368776 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:16:25.369193 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:16:25.369278 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:16:25.371000 systemd[1]: Stopped target network.target - Network.
Jan 29 11:16:25.372610 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:16:25.372678 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:16:25.374266 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:16:25.374312 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:16:25.376043 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:16:25.376084 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:16:25.377773 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:16:25.377818 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:16:25.379920 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:16:25.381514 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:16:25.385563 systemd-networkd[763]: eth0: DHCPv6 lease lost
Jan 29 11:16:25.387316 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:16:25.387421 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:16:25.388896 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:16:25.388929 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:16:25.395625 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:16:25.397258 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:16:25.397313 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:16:25.399161 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:16:25.402010 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:16:25.402135 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:16:25.406302 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:16:25.406378 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:16:25.407833 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:16:25.407877 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:16:25.409815 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:16:25.409857 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:16:25.412173 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:16:25.412312 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:16:25.414596 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:16:25.415559 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:16:25.418054 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:16:25.418115 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:16:25.419273 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:16:25.419311 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:16:25.421076 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:16:25.421127 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:16:25.423761 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:16:25.423809 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:16:25.426719 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:16:25.426767 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:16:25.438664 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:16:25.440295 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:16:25.440354 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:16:25.442281 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 11:16:25.442323 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:16:25.444546 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:16:25.444588 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:16:25.446660 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:16:25.446703 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:16:25.448848 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:16:25.448948 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:16:25.451807 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:16:25.451901 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:16:25.454293 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:16:25.455912 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:16:25.455971 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:16:25.469645 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:16:25.474865 systemd[1]: Switching root.
Jan 29 11:16:25.501343 systemd-journald[239]: Journal stopped
Jan 29 11:16:26.231714 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:16:26.231773 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 11:16:26.231786 kernel: SELinux: policy capability open_perms=1
Jan 29 11:16:26.231795 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 11:16:26.231804 kernel: SELinux: policy capability always_check_network=0
Jan 29 11:16:26.231813 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 11:16:26.231824 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 11:16:26.231833 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 11:16:26.231842 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 11:16:26.231852 kernel: audit: type=1403 audit(1738149385.667:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:16:26.231864 systemd[1]: Successfully loaded SELinux policy in 33.260ms.
Jan 29 11:16:26.231894 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.183ms.
Jan 29 11:16:26.231906 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:16:26.231918 systemd[1]: Detected virtualization kvm.
Jan 29 11:16:26.231928 systemd[1]: Detected architecture arm64.
Jan 29 11:16:26.231938 systemd[1]: Detected first boot.
Jan 29 11:16:26.231948 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:16:26.231958 zram_generator::config[1046]: No configuration found.
Jan 29 11:16:26.231975 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:16:26.231985 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 11:16:26.231995 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 11:16:26.232005 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:16:26.232016 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:16:26.232026 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:16:26.232036 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:16:26.232047 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:16:26.232057 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:16:26.232071 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:16:26.232082 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:16:26.232092 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:16:26.232102 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:16:26.232113 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:16:26.232123 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:16:26.232133 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:16:26.232143 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:16:26.232155 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:16:26.232165 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 29 11:16:26.232176 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:16:26.232186 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 11:16:26.232196 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 11:16:26.232206 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:16:26.232217 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:16:26.232228 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:16:26.232239 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:16:26.232250 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:16:26.232260 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:16:26.232270 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:16:26.232280 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:16:26.232292 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:16:26.232302 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:16:26.232312 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:16:26.232322 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:16:26.232335 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:16:26.232345 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:16:26.232355 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:16:26.232365 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:16:26.232376 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:16:26.232386 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:16:26.232396 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:16:26.232406 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:16:26.232416 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:16:26.232429 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:16:26.232439 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:16:26.232449 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:16:26.232459 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:16:26.232470 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:16:26.232480 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:16:26.232490 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:16:26.232500 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:16:26.232513 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:16:26.232533 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:16:26.232545 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 11:16:26.232555 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 11:16:26.232565 kernel: fuse: init (API version 7.39)
Jan 29 11:16:26.232575 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 11:16:26.232584 kernel: ACPI: bus type drm_connector registered
Jan 29 11:16:26.232594 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:16:26.232605 kernel: loop: module loaded
Jan 29 11:16:26.232617 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:16:26.232628 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:16:26.232638 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:16:26.232648 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:16:26.232675 systemd-journald[1127]: Collecting audit messages is disabled.
Jan 29 11:16:26.232696 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 11:16:26.232707 systemd[1]: Stopped verity-setup.service.
Jan 29 11:16:26.232717 systemd-journald[1127]: Journal started
Jan 29 11:16:26.232739 systemd-journald[1127]: Runtime Journal (/run/log/journal/72abc533bb154829bed8148b8ed04296) is 5.9M, max 47.3M, 41.4M free.
Jan 29 11:16:26.027203 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:16:26.044963 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 11:16:26.045310 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:16:26.237202 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:16:26.237803 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:16:26.238944 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:16:26.240176 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:16:26.241244 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:16:26.242464 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:16:26.243678 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:16:26.244814 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:16:26.246249 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:16:26.247707 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:16:26.247838 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:16:26.249213 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:16:26.249357 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:16:26.250773 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:16:26.250922 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:16:26.252193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:16:26.252335 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:16:26.253789 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:16:26.253934 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:16:26.255305 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:16:26.255440 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:16:26.256765 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:16:26.258065 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:16:26.259729 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:16:26.271473 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:16:26.280618 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:16:26.282682 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:16:26.283760 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:16:26.283796 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:16:26.285666 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:16:26.287719 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:16:26.289738 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:16:26.290825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:16:26.292403 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:16:26.294632 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:16:26.295920 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:16:26.298720 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:16:26.303537 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:16:26.304374 systemd-journald[1127]: Time spent on flushing to /var/log/journal/72abc533bb154829bed8148b8ed04296 is 31.945ms for 859 entries.
Jan 29 11:16:26.304374 systemd-journald[1127]: System Journal (/var/log/journal/72abc533bb154829bed8148b8ed04296) is 8.0M, max 195.6M, 187.6M free.
Jan 29 11:16:26.354405 systemd-journald[1127]: Received client request to flush runtime journal.
Jan 29 11:16:26.354481 kernel: loop0: detected capacity change from 0 to 116808
Jan 29 11:16:26.354502 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:16:26.304566 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:16:26.307822 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:16:26.310175 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:16:26.316153 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:16:26.317588 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:16:26.320808 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:16:26.322309 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:16:26.324255 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:16:26.329512 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:16:26.339791 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:16:26.344816 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:16:26.347074 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:16:26.355039 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
Jan 29 11:16:26.355049 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
Jan 29 11:16:26.359159 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:16:26.363343 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:16:26.368545 kernel: loop1: detected capacity change from 0 to 194096
Jan 29 11:16:26.378992 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:16:26.382142 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:16:26.383381 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:16:26.387962 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 11:16:26.402595 kernel: loop2: detected capacity change from 0 to 113536
Jan 29 11:16:26.403388 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:16:26.412749 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:16:26.424606 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Jan 29 11:16:26.424623 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Jan 29 11:16:26.428498 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:16:26.430622 kernel: loop3: detected capacity change from 0 to 116808
Jan 29 11:16:26.435591 kernel: loop4: detected capacity change from 0 to 194096
Jan 29 11:16:26.441550 kernel: loop5: detected capacity change from 0 to 113536
Jan 29 11:16:26.444228 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 11:16:26.444643 (sd-merge)[1185]: Merged extensions into '/usr'.
Jan 29 11:16:26.447698 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:16:26.447712 systemd[1]: Reloading...
Jan 29 11:16:26.505568 zram_generator::config[1214]: No configuration found.
Jan 29 11:16:26.576302 ldconfig[1152]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:16:26.597207 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:16:26.632930 systemd[1]: Reloading finished in 184 ms.
Jan 29 11:16:26.661853 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:16:26.663221 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:16:26.673674 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:16:26.675566 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:16:26.688282 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:16:26.688297 systemd[1]: Reloading...
Jan 29 11:16:26.694703 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:16:26.695249 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:16:26.696079 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:16:26.696392 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 29 11:16:26.696497 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 29 11:16:26.698831 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:16:26.698934 systemd-tmpfiles[1246]: Skipping /boot
Jan 29 11:16:26.705772 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:16:26.705869 systemd-tmpfiles[1246]: Skipping /boot
Jan 29 11:16:26.734560 zram_generator::config[1277]: No configuration found.
Jan 29 11:16:26.811208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:16:26.846509 systemd[1]: Reloading finished in 157 ms.
Jan 29 11:16:26.862041 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:16:26.874970 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:16:26.882782 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:16:26.884961 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:16:26.887574 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:16:26.893608 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:16:26.896858 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:16:26.899764 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:16:26.903015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:16:26.905335 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:16:26.909148 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:16:26.915779 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:16:26.916908 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:16:26.919533 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 11:16:26.921334 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 11:16:26.923146 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:16:26.923273 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:16:26.924847 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:16:26.924980 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:16:26.926672 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:16:26.926810 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:16:26.933301 systemd-udevd[1314]: Using default interface naming scheme 'v255'.
Jan 29 11:16:26.935100 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:16:26.940083 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:16:26.942289 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:16:26.945486 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:16:26.946563 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:16:26.950282 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 11:16:26.952211 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 11:16:26.953961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:16:26.954126 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:16:26.957653 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:16:26.959438 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:16:26.959574 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:16:26.961200 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:16:26.961352 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:16:26.963547 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:16:26.971304 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:16:26.979809 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:16:26.984069 systemd[1]: Finished ensure-sysext.service.
Jan 29 11:16:26.990920 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:16:26.995033 augenrules[1376]: No rules
Jan 29 11:16:27.006711 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:16:27.008772 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:16:27.011785 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:16:27.015141 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:16:27.016290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:16:27.018208 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:16:27.021300 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 11:16:27.022430 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:16:27.022950 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:16:27.023242 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:16:27.024929 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:16:27.025066 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:16:27.026380 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:16:27.026509 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:16:27.027875 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:16:27.028011 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:16:27.029970 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:16:27.030107 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:16:27.040433 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 11:16:27.041593 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:16:27.041657 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:16:27.056920 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1348)
Jan 29 11:16:27.057074 systemd-resolved[1313]: Positive Trust Anchors:
Jan 29 11:16:27.057364 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:16:27.057451 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:16:27.065466 systemd-resolved[1313]: Defaulting to hostname 'linux'.
Jan 29 11:16:27.067073 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:16:27.068809 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:16:27.091107 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:16:27.094722 systemd-networkd[1387]: lo: Link UP
Jan 29 11:16:27.094729 systemd-networkd[1387]: lo: Gained carrier
Jan 29 11:16:27.095467 systemd-networkd[1387]: Enumeration completed
Jan 29 11:16:27.100746 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 11:16:27.102434 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:16:27.103951 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:16:27.103960 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:16:27.104298 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 11:16:27.105172 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:16:27.105205 systemd-networkd[1387]: eth0: Link UP
Jan 29 11:16:27.105208 systemd-networkd[1387]: eth0: Gained carrier
Jan 29 11:16:27.105216 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:16:27.107084 systemd[1]: Reached target network.target - Network.
Jan 29 11:16:27.108134 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:16:27.110266 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:16:27.119639 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 11:16:27.122595 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:16:27.123314 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:16:27.123397 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection.
Jan 29 11:16:27.546611 systemd-timesyncd[1388]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 29 11:16:27.546658 systemd-timesyncd[1388]: Initial clock synchronization to Wed 2025-01-29 11:16:27.546524 UTC.
Jan 29 11:16:27.547794 systemd-resolved[1313]: Clock change detected. Flushing caches.
Jan 29 11:16:27.557494 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 11:16:27.565552 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 11:16:27.592281 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:16:27.592683 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:16:27.642934 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:16:27.644392 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:16:27.645524 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:16:27.646667 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:16:27.647908 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:16:27.649391 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:16:27.650533 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:16:27.651757 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:16:27.653005 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:16:27.653045 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:16:27.653938 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:16:27.655621 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:16:27.657980 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:16:27.670297 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:16:27.672439 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:16:27.673939 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:16:27.675094 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 29 11:16:27.676043 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:16:27.676990 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:16:27.677019 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:16:27.677924 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:16:27.679917 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:16:27.679993 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:16:27.682512 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:16:27.684460 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:16:27.685685 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:16:27.687623 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:16:27.699279 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:16:27.702009 jq[1422]: false Jan 29 11:16:27.702168 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:16:27.704655 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:16:27.708875 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 29 11:16:27.711172 extend-filesystems[1423]: Found loop3 Jan 29 11:16:27.712550 extend-filesystems[1423]: Found loop4 Jan 29 11:16:27.712550 extend-filesystems[1423]: Found loop5 Jan 29 11:16:27.712550 extend-filesystems[1423]: Found vda Jan 29 11:16:27.712550 extend-filesystems[1423]: Found vda1 Jan 29 11:16:27.712550 extend-filesystems[1423]: Found vda2 Jan 29 11:16:27.712550 extend-filesystems[1423]: Found vda3 Jan 29 11:16:27.712550 extend-filesystems[1423]: Found usr Jan 29 11:16:27.712550 extend-filesystems[1423]: Found vda4 Jan 29 11:16:27.712550 extend-filesystems[1423]: Found vda6 Jan 29 11:16:27.712550 extend-filesystems[1423]: Found vda7 Jan 29 11:16:27.712550 extend-filesystems[1423]: Found vda9 Jan 29 11:16:27.712550 extend-filesystems[1423]: Checking size of /dev/vda9 Jan 29 11:16:27.739074 extend-filesystems[1423]: Resized partition /dev/vda9 Jan 29 11:16:27.712794 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:16:27.741083 dbus-daemon[1421]: [system] SELinux support is enabled Jan 29 11:16:27.757067 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:16:27.757094 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1368) Jan 29 11:16:27.757108 extend-filesystems[1446]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:16:27.713167 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:16:27.714290 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:16:27.762461 jq[1438]: true Jan 29 11:16:27.716174 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:16:27.720783 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jan 29 11:16:27.726488 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:16:27.726634 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:16:27.726901 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:16:27.727032 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:16:27.731195 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:16:27.731351 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:16:27.752782 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:16:27.770214 jq[1447]: true Jan 29 11:16:27.777381 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:16:27.776732 (ntainerd)[1448]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:16:27.789468 update_engine[1437]: I20250129 11:16:27.784750 1437 main.cc:92] Flatcar Update Engine starting Jan 29 11:16:27.789468 update_engine[1437]: I20250129 11:16:27.788798 1437 update_check_scheduler.cc:74] Next update check in 5m11s Jan 29 11:16:27.789621 tar[1443]: linux-arm64/helm Jan 29 11:16:27.784978 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:16:27.785008 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:16:27.787119 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:16:27.787137 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 29 11:16:27.789153 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:16:27.790961 extend-filesystems[1446]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:16:27.790961 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:16:27.790961 extend-filesystems[1446]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:16:27.794709 extend-filesystems[1423]: Resized filesystem in /dev/vda9 Jan 29 11:16:27.791208 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:16:27.791936 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:16:27.798634 systemd-logind[1434]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 11:16:27.801278 systemd-logind[1434]: New seat seat0. Jan 29 11:16:27.809684 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:16:27.810875 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:16:27.826950 bash[1476]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:16:27.833468 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:16:27.835840 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:16:27.861890 locksmithd[1473]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:16:27.975835 containerd[1448]: time="2025-01-29T11:16:27.975745391Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:16:28.005531 containerd[1448]: time="2025-01-29T11:16:28.005477111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:16:28.007298 containerd[1448]: time="2025-01-29T11:16:28.007262951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:16:28.007353 containerd[1448]: time="2025-01-29T11:16:28.007298951Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:16:28.007392 containerd[1448]: time="2025-01-29T11:16:28.007371271Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:16:28.007635 containerd[1448]: time="2025-01-29T11:16:28.007612631Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:16:28.007661 containerd[1448]: time="2025-01-29T11:16:28.007644111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:16:28.007729 containerd[1448]: time="2025-01-29T11:16:28.007710591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:16:28.007729 containerd[1448]: time="2025-01-29T11:16:28.007727431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:16:28.007970 containerd[1448]: time="2025-01-29T11:16:28.007943111Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:16:28.007996 containerd[1448]: time="2025-01-29T11:16:28.007968311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:16:28.007996 containerd[1448]: time="2025-01-29T11:16:28.007983271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:16:28.007996 containerd[1448]: time="2025-01-29T11:16:28.007993631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:16:28.008143 containerd[1448]: time="2025-01-29T11:16:28.008121511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:16:28.008440 containerd[1448]: time="2025-01-29T11:16:28.008400791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:16:28.008605 containerd[1448]: time="2025-01-29T11:16:28.008581591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:16:28.008641 containerd[1448]: time="2025-01-29T11:16:28.008604871Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:16:28.008781 containerd[1448]: time="2025-01-29T11:16:28.008753591Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:16:28.008837 containerd[1448]: time="2025-01-29T11:16:28.008820631Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:16:28.012502 containerd[1448]: time="2025-01-29T11:16:28.012466591Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:16:28.012557 containerd[1448]: time="2025-01-29T11:16:28.012521991Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 29 11:16:28.012557 containerd[1448]: time="2025-01-29T11:16:28.012537471Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:16:28.012557 containerd[1448]: time="2025-01-29T11:16:28.012553111Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:16:28.012698 containerd[1448]: time="2025-01-29T11:16:28.012579231Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:16:28.012906 containerd[1448]: time="2025-01-29T11:16:28.012883791Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:16:28.013275 containerd[1448]: time="2025-01-29T11:16:28.013254151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:16:28.013419 containerd[1448]: time="2025-01-29T11:16:28.013390271Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:16:28.013446 containerd[1448]: time="2025-01-29T11:16:28.013428071Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:16:28.013446 containerd[1448]: time="2025-01-29T11:16:28.013443031Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:16:28.013480 containerd[1448]: time="2025-01-29T11:16:28.013456751Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:16:28.013480 containerd[1448]: time="2025-01-29T11:16:28.013469111Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 29 11:16:28.013513 containerd[1448]: time="2025-01-29T11:16:28.013482431Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:16:28.013513 containerd[1448]: time="2025-01-29T11:16:28.013495591Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:16:28.013513 containerd[1448]: time="2025-01-29T11:16:28.013509431Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:16:28.013570 containerd[1448]: time="2025-01-29T11:16:28.013525591Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:16:28.013570 containerd[1448]: time="2025-01-29T11:16:28.013546351Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:16:28.013570 containerd[1448]: time="2025-01-29T11:16:28.013557951Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:16:28.013624 containerd[1448]: time="2025-01-29T11:16:28.013577511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:16:28.013624 containerd[1448]: time="2025-01-29T11:16:28.013591591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:16:28.013624 containerd[1448]: time="2025-01-29T11:16:28.013603871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:16:28.013624 containerd[1448]: time="2025-01-29T11:16:28.013615631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 29 11:16:28.013717 containerd[1448]: time="2025-01-29T11:16:28.013627591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:16:28.013717 containerd[1448]: time="2025-01-29T11:16:28.013644551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:16:28.013717 containerd[1448]: time="2025-01-29T11:16:28.013655831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:16:28.013717 containerd[1448]: time="2025-01-29T11:16:28.013667391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:16:28.013717 containerd[1448]: time="2025-01-29T11:16:28.013679471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:16:28.013717 containerd[1448]: time="2025-01-29T11:16:28.013693591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:16:28.013717 containerd[1448]: time="2025-01-29T11:16:28.013705991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:16:28.013717 containerd[1448]: time="2025-01-29T11:16:28.013717151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:16:28.013866 containerd[1448]: time="2025-01-29T11:16:28.013729191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:16:28.013866 containerd[1448]: time="2025-01-29T11:16:28.013743671Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:16:28.013866 containerd[1448]: time="2025-01-29T11:16:28.013767271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 29 11:16:28.013866 containerd[1448]: time="2025-01-29T11:16:28.013791711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:16:28.013866 containerd[1448]: time="2025-01-29T11:16:28.013803151Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:16:28.013994 containerd[1448]: time="2025-01-29T11:16:28.013977711Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:16:28.014173 containerd[1448]: time="2025-01-29T11:16:28.014150551Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:16:28.014207 containerd[1448]: time="2025-01-29T11:16:28.014175351Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:16:28.014207 containerd[1448]: time="2025-01-29T11:16:28.014190351Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:16:28.014207 containerd[1448]: time="2025-01-29T11:16:28.014200511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:16:28.014258 containerd[1448]: time="2025-01-29T11:16:28.014212551Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:16:28.014258 containerd[1448]: time="2025-01-29T11:16:28.014222711Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:16:28.014258 containerd[1448]: time="2025-01-29T11:16:28.014234111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:16:28.014920 containerd[1448]: time="2025-01-29T11:16:28.014861911Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:16:28.015029 containerd[1448]: time="2025-01-29T11:16:28.014925991Z" level=info msg="Connect containerd service" Jan 29 11:16:28.015029 containerd[1448]: time="2025-01-29T11:16:28.014960631Z" level=info msg="using legacy CRI server" Jan 29 11:16:28.015029 containerd[1448]: time="2025-01-29T11:16:28.014966871Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:16:28.016000 containerd[1448]: time="2025-01-29T11:16:28.015308671Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:16:28.016467 containerd[1448]: time="2025-01-29T11:16:28.016427551Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:16:28.017125 containerd[1448]: time="2025-01-29T11:16:28.016743791Z" level=info msg="Start subscribing containerd event" Jan 29 11:16:28.017125 containerd[1448]: time="2025-01-29T11:16:28.016809031Z" level=info msg="Start recovering state" Jan 29 11:16:28.017125 containerd[1448]: time="2025-01-29T11:16:28.016873591Z" level=info msg="Start event monitor" Jan 29 11:16:28.017125 containerd[1448]: time="2025-01-29T11:16:28.016883831Z" level=info msg="Start 
snapshots syncer" Jan 29 11:16:28.017125 containerd[1448]: time="2025-01-29T11:16:28.016893031Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:16:28.017125 containerd[1448]: time="2025-01-29T11:16:28.016909831Z" level=info msg="Start streaming server" Jan 29 11:16:28.017125 containerd[1448]: time="2025-01-29T11:16:28.017005311Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:16:28.017125 containerd[1448]: time="2025-01-29T11:16:28.017046191Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:16:28.017182 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:16:28.021997 containerd[1448]: time="2025-01-29T11:16:28.021965671Z" level=info msg="containerd successfully booted in 0.047083s" Jan 29 11:16:28.024121 sshd_keygen[1442]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:16:28.045450 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:16:28.056722 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:16:28.063537 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:16:28.063884 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:16:28.078646 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:16:28.089370 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:16:28.092079 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:16:28.094259 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 11:16:28.095665 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:16:28.146259 tar[1443]: linux-arm64/LICENSE Jan 29 11:16:28.146492 tar[1443]: linux-arm64/README.md Jan 29 11:16:28.157606 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 29 11:16:29.269519 systemd-networkd[1387]: eth0: Gained IPv6LL Jan 29 11:16:29.272058 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:16:29.273902 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:16:29.282645 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:16:29.284935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:16:29.286958 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:16:29.301268 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:16:29.301581 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:16:29.303621 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:16:29.307517 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:16:29.764375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:16:29.765969 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:16:29.767472 systemd[1]: Startup finished in 555ms (kernel) + 4.957s (initrd) + 3.720s (userspace) = 9.233s. 
Jan 29 11:16:29.768400 (kubelet)[1532]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:16:30.213207 kubelet[1532]: E0129 11:16:30.213153 1532 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:16:30.216021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:16:30.216166 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:16:33.845078 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:16:33.846154 systemd[1]: Started sshd@0-10.0.0.135:22-10.0.0.1:38450.service - OpenSSH per-connection server daemon (10.0.0.1:38450). Jan 29 11:16:33.917377 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 38450 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:16:33.918997 sshd-session[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:33.928676 systemd-logind[1434]: New session 1 of user core. Jan 29 11:16:33.929986 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:16:33.939654 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:16:33.950256 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:16:33.954556 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:16:33.962648 (systemd)[1551]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:16:34.032849 systemd[1551]: Queued start job for default target default.target. 
Jan 29 11:16:34.042285 systemd[1551]: Created slice app.slice - User Application Slice. Jan 29 11:16:34.042329 systemd[1551]: Reached target paths.target - Paths. Jan 29 11:16:34.042340 systemd[1551]: Reached target timers.target - Timers. Jan 29 11:16:34.043532 systemd[1551]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:16:34.053196 systemd[1551]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:16:34.053255 systemd[1551]: Reached target sockets.target - Sockets. Jan 29 11:16:34.053266 systemd[1551]: Reached target basic.target - Basic System. Jan 29 11:16:34.053299 systemd[1551]: Reached target default.target - Main User Target. Jan 29 11:16:34.053325 systemd[1551]: Startup finished in 85ms. Jan 29 11:16:34.053662 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:16:34.055069 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:16:34.113018 systemd[1]: Started sshd@1-10.0.0.135:22-10.0.0.1:38456.service - OpenSSH per-connection server daemon (10.0.0.1:38456). Jan 29 11:16:34.158242 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 38456 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:16:34.159541 sshd-session[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:34.163997 systemd-logind[1434]: New session 2 of user core. Jan 29 11:16:34.171617 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:16:34.223320 sshd[1564]: Connection closed by 10.0.0.1 port 38456 Jan 29 11:16:34.222976 sshd-session[1562]: pam_unix(sshd:session): session closed for user core Jan 29 11:16:34.236421 systemd[1]: sshd@1-10.0.0.135:22-10.0.0.1:38456.service: Deactivated successfully. Jan 29 11:16:34.237603 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:16:34.239567 systemd-logind[1434]: Session 2 logged out. Waiting for processes to exit. 
Jan 29 11:16:34.239961 systemd[1]: Started sshd@2-10.0.0.135:22-10.0.0.1:38466.service - OpenSSH per-connection server daemon (10.0.0.1:38466). Jan 29 11:16:34.241106 systemd-logind[1434]: Removed session 2. Jan 29 11:16:34.283621 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 38466 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:16:34.284631 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:34.288104 systemd-logind[1434]: New session 3 of user core. Jan 29 11:16:34.296597 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:16:34.348458 sshd[1571]: Connection closed by 10.0.0.1 port 38466 Jan 29 11:16:34.348877 sshd-session[1569]: pam_unix(sshd:session): session closed for user core Jan 29 11:16:34.361903 systemd[1]: sshd@2-10.0.0.135:22-10.0.0.1:38466.service: Deactivated successfully. Jan 29 11:16:34.363253 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:16:34.364451 systemd-logind[1434]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:16:34.365554 systemd[1]: Started sshd@3-10.0.0.135:22-10.0.0.1:38474.service - OpenSSH per-connection server daemon (10.0.0.1:38474). Jan 29 11:16:34.366305 systemd-logind[1434]: Removed session 3. Jan 29 11:16:34.410347 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 38474 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:16:34.411397 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:34.414977 systemd-logind[1434]: New session 4 of user core. Jan 29 11:16:34.425533 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 29 11:16:34.475877 sshd[1578]: Connection closed by 10.0.0.1 port 38474 Jan 29 11:16:34.476236 sshd-session[1576]: pam_unix(sshd:session): session closed for user core Jan 29 11:16:34.484536 systemd[1]: sshd@3-10.0.0.135:22-10.0.0.1:38474.service: Deactivated successfully. Jan 29 11:16:34.486617 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:16:34.487689 systemd-logind[1434]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:16:34.488648 systemd[1]: Started sshd@4-10.0.0.135:22-10.0.0.1:38476.service - OpenSSH per-connection server daemon (10.0.0.1:38476). Jan 29 11:16:34.489313 systemd-logind[1434]: Removed session 4. Jan 29 11:16:34.532065 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 38476 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:16:34.533041 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:34.536355 systemd-logind[1434]: New session 5 of user core. Jan 29 11:16:34.553541 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:16:34.611353 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:16:34.611635 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:16:34.625106 sudo[1586]: pam_unix(sudo:session): session closed for user root Jan 29 11:16:34.626277 sshd[1585]: Connection closed by 10.0.0.1 port 38476 Jan 29 11:16:34.626595 sshd-session[1583]: pam_unix(sshd:session): session closed for user core Jan 29 11:16:34.639522 systemd[1]: sshd@4-10.0.0.135:22-10.0.0.1:38476.service: Deactivated successfully. Jan 29 11:16:34.642561 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:16:34.643757 systemd-logind[1434]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:16:34.657639 systemd[1]: Started sshd@5-10.0.0.135:22-10.0.0.1:38488.service - OpenSSH per-connection server daemon (10.0.0.1:38488). 
Jan 29 11:16:34.658454 systemd-logind[1434]: Removed session 5. Jan 29 11:16:34.698254 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 38488 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:16:34.699288 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:34.702847 systemd-logind[1434]: New session 6 of user core. Jan 29 11:16:34.714603 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:16:34.763372 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:16:34.763643 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:16:34.766233 sudo[1595]: pam_unix(sudo:session): session closed for user root Jan 29 11:16:34.770252 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:16:34.770510 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:16:34.790648 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:16:34.811378 augenrules[1617]: No rules Jan 29 11:16:34.812380 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:16:34.812591 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:16:34.813444 sudo[1594]: pam_unix(sudo:session): session closed for user root Jan 29 11:16:34.814383 sshd[1593]: Connection closed by 10.0.0.1 port 38488 Jan 29 11:16:34.814715 sshd-session[1591]: pam_unix(sshd:session): session closed for user core Jan 29 11:16:34.827378 systemd[1]: sshd@5-10.0.0.135:22-10.0.0.1:38488.service: Deactivated successfully. Jan 29 11:16:34.828544 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:16:34.830326 systemd-logind[1434]: Session 6 logged out. Waiting for processes to exit. 
Jan 29 11:16:34.831331 systemd[1]: Started sshd@6-10.0.0.135:22-10.0.0.1:38496.service - OpenSSH per-connection server daemon (10.0.0.1:38496). Jan 29 11:16:34.831971 systemd-logind[1434]: Removed session 6. Jan 29 11:16:34.875527 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 38496 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:16:34.876488 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:34.879401 systemd-logind[1434]: New session 7 of user core. Jan 29 11:16:34.891534 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:16:34.940330 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:16:34.940593 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:16:35.239621 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:16:35.239761 (dockerd)[1648]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:16:35.486446 dockerd[1648]: time="2025-01-29T11:16:35.486165631Z" level=info msg="Starting up" Jan 29 11:16:35.626467 dockerd[1648]: time="2025-01-29T11:16:35.626425631Z" level=info msg="Loading containers: start." Jan 29 11:16:35.760431 kernel: Initializing XFRM netlink socket Jan 29 11:16:35.819181 systemd-networkd[1387]: docker0: Link UP Jan 29 11:16:35.851515 dockerd[1648]: time="2025-01-29T11:16:35.851431671Z" level=info msg="Loading containers: done." Jan 29 11:16:35.865164 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1594281272-merged.mount: Deactivated successfully. 
Jan 29 11:16:35.867437 dockerd[1648]: time="2025-01-29T11:16:35.867353551Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:16:35.867521 dockerd[1648]: time="2025-01-29T11:16:35.867461191Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 29 11:16:35.867584 dockerd[1648]: time="2025-01-29T11:16:35.867556151Z" level=info msg="Daemon has completed initialization" Jan 29 11:16:35.892283 dockerd[1648]: time="2025-01-29T11:16:35.892136791Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:16:35.892307 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:16:36.487373 containerd[1448]: time="2025-01-29T11:16:36.487323951Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 11:16:37.325334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521558969.mount: Deactivated successfully. 
Jan 29 11:16:38.614756 containerd[1448]: time="2025-01-29T11:16:38.614687791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:38.616265 containerd[1448]: time="2025-01-29T11:16:38.616154871Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864937" Jan 29 11:16:38.617240 containerd[1448]: time="2025-01-29T11:16:38.617204471Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:38.620341 containerd[1448]: time="2025-01-29T11:16:38.620290111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:38.621434 containerd[1448]: time="2025-01-29T11:16:38.621384471Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.13400704s" Jan 29 11:16:38.621434 containerd[1448]: time="2025-01-29T11:16:38.621433231Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 29 11:16:38.643495 containerd[1448]: time="2025-01-29T11:16:38.643464791Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 11:16:40.286938 containerd[1448]: time="2025-01-29T11:16:40.286879151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:40.287573 containerd[1448]: time="2025-01-29T11:16:40.287526431Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901563" Jan 29 11:16:40.288198 containerd[1448]: time="2025-01-29T11:16:40.288164991Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:40.293433 containerd[1448]: time="2025-01-29T11:16:40.293370511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:40.294587 containerd[1448]: time="2025-01-29T11:16:40.294526391Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.65102552s" Jan 29 11:16:40.294587 containerd[1448]: time="2025-01-29T11:16:40.294556791Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 29 11:16:40.312416 containerd[1448]: time="2025-01-29T11:16:40.312379551Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 11:16:40.364872 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:16:40.374618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:16:40.472307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:16:40.475946 (kubelet)[1932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:16:40.515604 kubelet[1932]: E0129 11:16:40.515551 1932 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:16:40.518438 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:16:40.518578 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:16:41.456910 containerd[1448]: time="2025-01-29T11:16:41.456864111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:41.457433 containerd[1448]: time="2025-01-29T11:16:41.457380311Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164340" Jan 29 11:16:41.458237 containerd[1448]: time="2025-01-29T11:16:41.458172951Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:41.461806 containerd[1448]: time="2025-01-29T11:16:41.461759111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:41.462911 containerd[1448]: time="2025-01-29T11:16:41.462869431Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.15045556s" Jan 29 11:16:41.462911 containerd[1448]: time="2025-01-29T11:16:41.462907831Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 29 11:16:41.481419 containerd[1448]: time="2025-01-29T11:16:41.481366911Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 11:16:42.677911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1676674408.mount: Deactivated successfully. Jan 29 11:16:43.006925 containerd[1448]: time="2025-01-29T11:16:43.006803391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:43.007854 containerd[1448]: time="2025-01-29T11:16:43.007637671Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714" Jan 29 11:16:43.008558 containerd[1448]: time="2025-01-29T11:16:43.008495391Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:43.010425 containerd[1448]: time="2025-01-29T11:16:43.010369391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:43.011234 containerd[1448]: time="2025-01-29T11:16:43.011202951Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.52979108s" Jan 29 11:16:43.011301 containerd[1448]: time="2025-01-29T11:16:43.011238111Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 29 11:16:43.030274 containerd[1448]: time="2025-01-29T11:16:43.030222871Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:16:43.848290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount721704334.mount: Deactivated successfully. Jan 29 11:16:44.687297 containerd[1448]: time="2025-01-29T11:16:44.687241791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:44.689304 containerd[1448]: time="2025-01-29T11:16:44.688987711Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 29 11:16:44.690023 containerd[1448]: time="2025-01-29T11:16:44.689989751Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:44.696060 containerd[1448]: time="2025-01-29T11:16:44.696015311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:44.696817 containerd[1448]: time="2025-01-29T11:16:44.696767111Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.66650596s" Jan 29 11:16:44.696872 containerd[1448]: time="2025-01-29T11:16:44.696817551Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 29 11:16:44.715010 containerd[1448]: time="2025-01-29T11:16:44.714973951Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 11:16:45.265122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2949192410.mount: Deactivated successfully. Jan 29 11:16:45.269204 containerd[1448]: time="2025-01-29T11:16:45.269154871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:45.270422 containerd[1448]: time="2025-01-29T11:16:45.270368671Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 29 11:16:45.271163 containerd[1448]: time="2025-01-29T11:16:45.271121871Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:45.273733 containerd[1448]: time="2025-01-29T11:16:45.273699031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:45.274713 containerd[1448]: time="2025-01-29T11:16:45.274584791Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 559.57304ms" Jan 29 
11:16:45.274713 containerd[1448]: time="2025-01-29T11:16:45.274614631Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 29 11:16:45.292515 containerd[1448]: time="2025-01-29T11:16:45.292489911Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 11:16:45.973160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3917700705.mount: Deactivated successfully. Jan 29 11:16:47.737556 containerd[1448]: time="2025-01-29T11:16:47.737495991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:47.738170 containerd[1448]: time="2025-01-29T11:16:47.738125831Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Jan 29 11:16:47.738866 containerd[1448]: time="2025-01-29T11:16:47.738829551Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:47.741899 containerd[1448]: time="2025-01-29T11:16:47.741868311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:47.743257 containerd[1448]: time="2025-01-29T11:16:47.743223871Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.45070196s" Jan 29 11:16:47.743257 containerd[1448]: time="2025-01-29T11:16:47.743256311Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image 
reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 29 11:16:50.614829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:16:50.629648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:16:50.748168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:16:50.751528 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:16:50.786764 kubelet[2156]: E0129 11:16:50.786680 2156 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:16:50.789374 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:16:50.789533 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:16:51.859906 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:16:51.871903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:16:51.887923 systemd[1]: Reloading requested from client PID 2171 ('systemctl') (unit session-7.scope)... Jan 29 11:16:51.887938 systemd[1]: Reloading... Jan 29 11:16:51.955514 zram_generator::config[2210]: No configuration found. Jan 29 11:16:52.098442 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:16:52.149232 systemd[1]: Reloading finished in 261 ms. Jan 29 11:16:52.203136 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:16:52.205540 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:16:52.205725 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:16:52.207180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:16:52.350773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:16:52.354923 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:16:52.397063 kubelet[2257]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:16:52.397063 kubelet[2257]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:16:52.397063 kubelet[2257]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:16:52.397393 kubelet[2257]: I0129 11:16:52.397095 2257 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:16:52.942873 kubelet[2257]: I0129 11:16:52.942838 2257 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:16:52.944904 kubelet[2257]: I0129 11:16:52.943046 2257 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:16:52.944904 kubelet[2257]: I0129 11:16:52.943289 2257 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:16:52.983730 kubelet[2257]: I0129 11:16:52.983692 2257 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:16:52.983837 kubelet[2257]: E0129 11:16:52.983773 2257 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.135:6443: connect: connection refused Jan 29 11:16:52.991211 kubelet[2257]: I0129 11:16:52.991170 2257 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:16:52.992478 kubelet[2257]: I0129 11:16:52.992434 2257 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:16:52.992644 kubelet[2257]: I0129 11:16:52.992480 2257 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:16:52.992728 kubelet[2257]: I0129 11:16:52.992700 2257 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 
11:16:52.992728 kubelet[2257]: I0129 11:16:52.992708 2257 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:16:52.992986 kubelet[2257]: I0129 11:16:52.992962 2257 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:16:52.995597 kubelet[2257]: I0129 11:16:52.995557 2257 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:16:52.995597 kubelet[2257]: I0129 11:16:52.995578 2257 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:16:52.995708 kubelet[2257]: I0129 11:16:52.995703 2257 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:16:52.996304 kubelet[2257]: I0129 11:16:52.995893 2257 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:16:52.996553 kubelet[2257]: W0129 11:16:52.996446 2257 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Jan 29 11:16:52.996553 kubelet[2257]: W0129 11:16:52.996480 2257 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Jan 29 11:16:52.996553 kubelet[2257]: E0129 11:16:52.996532 2257 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Jan 29 11:16:52.996553 kubelet[2257]: E0129 11:16:52.996515 2257 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: 
connection refused
Jan 29 11:16:52.997003 kubelet[2257]: I0129 11:16:52.996973 2257 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 11:16:52.997328 kubelet[2257]: I0129 11:16:52.997316 2257 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 11:16:52.997520 kubelet[2257]: W0129 11:16:52.997508 2257 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 11:16:52.998384 kubelet[2257]: I0129 11:16:52.998284 2257 server.go:1264] "Started kubelet"
Jan 29 11:16:52.999773 kubelet[2257]: I0129 11:16:52.999575 2257 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 11:16:53.002590 kubelet[2257]: E0129 11:16:53.000446 2257 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.135:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.135:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f25ae9850a247 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:16:52.998259271 +0000 UTC m=+0.640397761,LastTimestamp:2025-01-29 11:16:52.998259271 +0000 UTC m=+0.640397761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 29 11:16:53.002590 kubelet[2257]: I0129 11:16:53.000932 2257 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 11:16:53.002590 kubelet[2257]: I0129 11:16:53.002011 2257 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 11:16:53.003589 kubelet[2257]: I0129 11:16:53.003525 2257 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 11:16:53.004104 kubelet[2257]: I0129 11:16:53.004065 2257 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 11:16:53.005290 kubelet[2257]: I0129 11:16:53.004567 2257 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 11:16:53.005290 kubelet[2257]: W0129 11:16:53.005119 2257 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Jan 29 11:16:53.005290 kubelet[2257]: E0129 11:16:53.005162 2257 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Jan 29 11:16:53.005290 kubelet[2257]: I0129 11:16:53.005172 2257 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 11:16:53.005478 kubelet[2257]: I0129 11:16:53.005306 2257 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 11:16:53.006528 kubelet[2257]: E0129 11:16:53.006477 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="200ms"
Jan 29 11:16:53.008109 kubelet[2257]: E0129 11:16:53.008054 2257 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 11:16:53.008597 kubelet[2257]: I0129 11:16:53.008576 2257 factory.go:221] Registration of the containerd container factory successfully
Jan 29 11:16:53.008684 kubelet[2257]: I0129 11:16:53.008674 2257 factory.go:221] Registration of the systemd container factory successfully
Jan 29 11:16:53.008821 kubelet[2257]: I0129 11:16:53.008797 2257 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 11:16:53.015018 kubelet[2257]: I0129 11:16:53.014971 2257 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 11:16:53.016464 kubelet[2257]: I0129 11:16:53.016159 2257 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 11:16:53.016464 kubelet[2257]: I0129 11:16:53.016312 2257 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 11:16:53.016464 kubelet[2257]: I0129 11:16:53.016330 2257 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 11:16:53.016464 kubelet[2257]: E0129 11:16:53.016369 2257 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 11:16:53.021621 kubelet[2257]: W0129 11:16:53.021577 2257 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Jan 29 11:16:53.021621 kubelet[2257]: E0129 11:16:53.021618 2257 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Jan 29 11:16:53.023207 kubelet[2257]: I0129 11:16:53.023021 2257 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 11:16:53.023207 kubelet[2257]: I0129 11:16:53.023143 2257 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 11:16:53.023207 kubelet[2257]: I0129 11:16:53.023162 2257 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:16:53.024974 kubelet[2257]: I0129 11:16:53.024946 2257 policy_none.go:49] "None policy: Start"
Jan 29 11:16:53.025514 kubelet[2257]: I0129 11:16:53.025495 2257 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 11:16:53.025573 kubelet[2257]: I0129 11:16:53.025528 2257 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 11:16:53.031186 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 11:16:53.045357 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 11:16:53.048235 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 11:16:53.063206 kubelet[2257]: I0129 11:16:53.063177 2257 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 11:16:53.063461 kubelet[2257]: I0129 11:16:53.063398 2257 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 11:16:53.063567 kubelet[2257]: I0129 11:16:53.063538 2257 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 11:16:53.064675 kubelet[2257]: E0129 11:16:53.064641 2257 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 29 11:16:53.105415 kubelet[2257]: I0129 11:16:53.105353 2257 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 11:16:53.105764 kubelet[2257]: E0129 11:16:53.105722 2257 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost"
Jan 29 11:16:53.116914 kubelet[2257]: I0129 11:16:53.116846 2257 topology_manager.go:215] "Topology Admit Handler" podUID="e309fa4c32b83881107e9d1036526175" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 29 11:16:53.117918 kubelet[2257]: I0129 11:16:53.117894 2257 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 29 11:16:53.118723 kubelet[2257]: I0129 11:16:53.118674 2257 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 29 11:16:53.124112 systemd[1]: Created slice kubepods-burstable-pode309fa4c32b83881107e9d1036526175.slice - libcontainer container kubepods-burstable-pode309fa4c32b83881107e9d1036526175.slice.
Jan 29 11:16:53.145048 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice.
Jan 29 11:16:53.160504 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice.
Jan 29 11:16:53.206916 kubelet[2257]: I0129 11:16:53.206353 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e309fa4c32b83881107e9d1036526175-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e309fa4c32b83881107e9d1036526175\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:16:53.206916 kubelet[2257]: I0129 11:16:53.206397 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:16:53.206916 kubelet[2257]: I0129 11:16:53.206430 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost"
Jan 29 11:16:53.206916 kubelet[2257]: I0129 11:16:53.206447 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:16:53.206916 kubelet[2257]: I0129 11:16:53.206464 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:16:53.207064 kubelet[2257]: I0129 11:16:53.206480 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:16:53.207064 kubelet[2257]: I0129 11:16:53.206497 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e309fa4c32b83881107e9d1036526175-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e309fa4c32b83881107e9d1036526175\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:16:53.207064 kubelet[2257]: I0129 11:16:53.206512 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e309fa4c32b83881107e9d1036526175-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e309fa4c32b83881107e9d1036526175\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:16:53.207064 kubelet[2257]: E0129 11:16:53.206867 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="400ms"
Jan 29 11:16:53.207587 kubelet[2257]: I0129 11:16:53.207558 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:16:53.307011 kubelet[2257]: I0129 11:16:53.306983 2257 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 11:16:53.307277 kubelet[2257]: E0129 11:16:53.307243 2257 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost"
Jan 29 11:16:53.443697 kubelet[2257]: E0129 11:16:53.443663 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:16:53.444355 containerd[1448]: time="2025-01-29T11:16:53.444308631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e309fa4c32b83881107e9d1036526175,Namespace:kube-system,Attempt:0,}"
Jan 29 11:16:53.458977 kubelet[2257]: E0129 11:16:53.458863 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:16:53.459284 containerd[1448]: time="2025-01-29T11:16:53.459228591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}"
Jan 29 11:16:53.462671 kubelet[2257]: E0129 11:16:53.462578 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:16:53.463151 containerd[1448]: time="2025-01-29T11:16:53.462887031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}"
Jan 29 11:16:53.607464 kubelet[2257]: E0129 11:16:53.607402 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="800ms"
Jan 29 11:16:53.709070 kubelet[2257]: I0129 11:16:53.708990 2257 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 11:16:53.709405 kubelet[2257]: E0129 11:16:53.709379 2257 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost"
Jan 29 11:16:54.014341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1856396738.mount: Deactivated successfully.
Jan 29 11:16:54.019006 containerd[1448]: time="2025-01-29T11:16:54.018943631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:16:54.020943 containerd[1448]: time="2025-01-29T11:16:54.020897231Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jan 29 11:16:54.021464 containerd[1448]: time="2025-01-29T11:16:54.021431231Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:16:54.023347 containerd[1448]: time="2025-01-29T11:16:54.023317351Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:16:54.024679 containerd[1448]: time="2025-01-29T11:16:54.024635111Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 11:16:54.025418 containerd[1448]: time="2025-01-29T11:16:54.025371511Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:16:54.026123 containerd[1448]: time="2025-01-29T11:16:54.026083191Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 11:16:54.027204 containerd[1448]: time="2025-01-29T11:16:54.027169631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:16:54.028175 containerd[1448]: time="2025-01-29T11:16:54.028137791Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 583.74956ms"
Jan 29 11:16:54.032355 containerd[1448]: time="2025-01-29T11:16:54.032205031Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.74688ms"
Jan 29 11:16:54.033031 containerd[1448]: time="2025-01-29T11:16:54.033008431Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 570.06688ms"
Jan 29 11:16:54.115747 kubelet[2257]: W0129 11:16:54.115607 2257 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Jan 29 11:16:54.115747 kubelet[2257]: E0129 11:16:54.115688 2257 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Jan 29 11:16:54.146516 kubelet[2257]: W0129 11:16:54.146458 2257 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Jan 29 11:16:54.146516 kubelet[2257]: E0129 11:16:54.146496 2257 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Jan 29 11:16:54.190295 containerd[1448]: time="2025-01-29T11:16:54.190186031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:16:54.190295 containerd[1448]: time="2025-01-29T11:16:54.190288551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:16:54.190457 containerd[1448]: time="2025-01-29T11:16:54.190305231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:16:54.191086 containerd[1448]: time="2025-01-29T11:16:54.190868311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:16:54.191086 containerd[1448]: time="2025-01-29T11:16:54.190926591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:16:54.191086 containerd[1448]: time="2025-01-29T11:16:54.190937711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:16:54.191192 containerd[1448]: time="2025-01-29T11:16:54.191073591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:16:54.191495 containerd[1448]: time="2025-01-29T11:16:54.191379991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:16:54.191927 containerd[1448]: time="2025-01-29T11:16:54.191844671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:16:54.191927 containerd[1448]: time="2025-01-29T11:16:54.191873351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:16:54.192155 containerd[1448]: time="2025-01-29T11:16:54.192016671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:16:54.193877 containerd[1448]: time="2025-01-29T11:16:54.193796911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:16:54.214572 systemd[1]: Started cri-containerd-4a9a9e36898b0de50c5d704bd862da8e340e721684347a85c14f765016ba703b.scope - libcontainer container 4a9a9e36898b0de50c5d704bd862da8e340e721684347a85c14f765016ba703b.
Jan 29 11:16:54.215762 systemd[1]: Started cri-containerd-fa02795d83d1799db4c9ec9242cb35991f9b5bff34086bd4d2289e9693c34153.scope - libcontainer container fa02795d83d1799db4c9ec9242cb35991f9b5bff34086bd4d2289e9693c34153.
Jan 29 11:16:54.219439 systemd[1]: Started cri-containerd-3bb219616e740fd38dbe0f5cbb5e19bd8cdd3d1c813283abf0d5ee97cecf6a16.scope - libcontainer container 3bb219616e740fd38dbe0f5cbb5e19bd8cdd3d1c813283abf0d5ee97cecf6a16.
Jan 29 11:16:54.246334 containerd[1448]: time="2025-01-29T11:16:54.246280791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a9a9e36898b0de50c5d704bd862da8e340e721684347a85c14f765016ba703b\""
Jan 29 11:16:54.249754 containerd[1448]: time="2025-01-29T11:16:54.249709231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa02795d83d1799db4c9ec9242cb35991f9b5bff34086bd4d2289e9693c34153\""
Jan 29 11:16:54.249855 kubelet[2257]: E0129 11:16:54.249757 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:16:54.251336 kubelet[2257]: E0129 11:16:54.251301 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:16:54.253225 containerd[1448]: time="2025-01-29T11:16:54.253134071Z" level=info msg="CreateContainer within sandbox \"4a9a9e36898b0de50c5d704bd862da8e340e721684347a85c14f765016ba703b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 29 11:16:54.254595 containerd[1448]: time="2025-01-29T11:16:54.254385911Z" level=info msg="CreateContainer within sandbox \"fa02795d83d1799db4c9ec9242cb35991f9b5bff34086bd4d2289e9693c34153\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 29 11:16:54.254595 containerd[1448]: time="2025-01-29T11:16:54.254585711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e309fa4c32b83881107e9d1036526175,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bb219616e740fd38dbe0f5cbb5e19bd8cdd3d1c813283abf0d5ee97cecf6a16\""
Jan 29 11:16:54.255359 kubelet[2257]: E0129 11:16:54.255340 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:16:54.257997 containerd[1448]: time="2025-01-29T11:16:54.257941551Z" level=info msg="CreateContainer within sandbox \"3bb219616e740fd38dbe0f5cbb5e19bd8cdd3d1c813283abf0d5ee97cecf6a16\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 29 11:16:54.272153 containerd[1448]: time="2025-01-29T11:16:54.272002311Z" level=info msg="CreateContainer within sandbox \"4a9a9e36898b0de50c5d704bd862da8e340e721684347a85c14f765016ba703b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"28dc2c603344c03dd9ca2c5e3d2deb54342fcba2e1d772891471c64453030771\""
Jan 29 11:16:54.273215 containerd[1448]: time="2025-01-29T11:16:54.272961711Z" level=info msg="CreateContainer within sandbox \"fa02795d83d1799db4c9ec9242cb35991f9b5bff34086bd4d2289e9693c34153\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7ac5fd6ccb3d6495aff2be02c442caf9024d6c90122016d93ddcdb50c76a49c7\""
Jan 29 11:16:54.273215 containerd[1448]: time="2025-01-29T11:16:54.272968511Z" level=info msg="StartContainer for \"28dc2c603344c03dd9ca2c5e3d2deb54342fcba2e1d772891471c64453030771\""
Jan 29 11:16:54.273420 containerd[1448]: time="2025-01-29T11:16:54.273382191Z" level=info msg="StartContainer for \"7ac5fd6ccb3d6495aff2be02c442caf9024d6c90122016d93ddcdb50c76a49c7\""
Jan 29 11:16:54.276780 containerd[1448]: time="2025-01-29T11:16:54.276736671Z" level=info msg="CreateContainer within sandbox \"3bb219616e740fd38dbe0f5cbb5e19bd8cdd3d1c813283abf0d5ee97cecf6a16\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7da3db85cd84d2c1c16c61789fcc4f027a8852db64a28682b7968404eb5f1cf8\""
Jan 29 11:16:54.277432 containerd[1448]: time="2025-01-29T11:16:54.277136231Z" level=info msg="StartContainer for \"7da3db85cd84d2c1c16c61789fcc4f027a8852db64a28682b7968404eb5f1cf8\""
Jan 29 11:16:54.306562 systemd[1]: Started cri-containerd-28dc2c603344c03dd9ca2c5e3d2deb54342fcba2e1d772891471c64453030771.scope - libcontainer container 28dc2c603344c03dd9ca2c5e3d2deb54342fcba2e1d772891471c64453030771.
Jan 29 11:16:54.307599 systemd[1]: Started cri-containerd-7ac5fd6ccb3d6495aff2be02c442caf9024d6c90122016d93ddcdb50c76a49c7.scope - libcontainer container 7ac5fd6ccb3d6495aff2be02c442caf9024d6c90122016d93ddcdb50c76a49c7.
Jan 29 11:16:54.312969 kubelet[2257]: W0129 11:16:54.312870 2257 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Jan 29 11:16:54.312969 kubelet[2257]: E0129 11:16:54.312942 2257 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused
Jan 29 11:16:54.313300 systemd[1]: Started cri-containerd-7da3db85cd84d2c1c16c61789fcc4f027a8852db64a28682b7968404eb5f1cf8.scope - libcontainer container 7da3db85cd84d2c1c16c61789fcc4f027a8852db64a28682b7968404eb5f1cf8.
Jan 29 11:16:54.342900 containerd[1448]: time="2025-01-29T11:16:54.342815551Z" level=info msg="StartContainer for \"28dc2c603344c03dd9ca2c5e3d2deb54342fcba2e1d772891471c64453030771\" returns successfully"
Jan 29 11:16:54.368729 containerd[1448]: time="2025-01-29T11:16:54.368637231Z" level=info msg="StartContainer for \"7ac5fd6ccb3d6495aff2be02c442caf9024d6c90122016d93ddcdb50c76a49c7\" returns successfully"
Jan 29 11:16:54.368729 containerd[1448]: time="2025-01-29T11:16:54.368718951Z" level=info msg="StartContainer for \"7da3db85cd84d2c1c16c61789fcc4f027a8852db64a28682b7968404eb5f1cf8\" returns successfully"
Jan 29 11:16:54.414988 kubelet[2257]: E0129 11:16:54.408824 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="1.6s"
Jan 29 11:16:54.511798 kubelet[2257]: I0129 11:16:54.511473 2257 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 11:16:54.512341 kubelet[2257]: E0129 11:16:54.512297 2257 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost"
Jan 29 11:16:55.039278 kubelet[2257]: E0129 11:16:55.039248 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:16:55.039901 kubelet[2257]: E0129 11:16:55.039877 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:16:55.041555 kubelet[2257]: E0129 11:16:55.041530 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:16:56.043701 kubelet[2257]: E0129 11:16:56.043670 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:16:56.113934 kubelet[2257]: I0129 11:16:56.113900 2257 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 11:16:56.407846 kubelet[2257]: E0129 11:16:56.407791 2257 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 29 11:16:56.475031 kubelet[2257]: I0129 11:16:56.474998 2257 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 29 11:16:56.998667 kubelet[2257]: I0129 11:16:56.998604 2257 apiserver.go:52] "Watching apiserver"
Jan 29 11:16:57.005225 kubelet[2257]: I0129 11:16:57.005179 2257 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 11:16:57.392877 kubelet[2257]: E0129 11:16:57.392509 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:16:58.047770 kubelet[2257]: E0129 11:16:58.047739 2257 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:16:58.311619 systemd[1]: Reloading requested from client PID 2537 ('systemctl') (unit session-7.scope)...
Jan 29 11:16:58.311636 systemd[1]: Reloading...
Jan 29 11:16:58.378464 zram_generator::config[2579]: No configuration found.
Jan 29 11:16:58.456398 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:16:58.518458 systemd[1]: Reloading finished in 206 ms.
Jan 29 11:16:58.551058 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:16:58.563557 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 11:16:58.564509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:16:58.581715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:16:58.669466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:16:58.674947 (kubelet)[2618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 11:16:58.715395 kubelet[2618]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:16:58.716233 kubelet[2618]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 11:16:58.716233 kubelet[2618]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:16:58.716233 kubelet[2618]: I0129 11:16:58.715848 2618 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 11:16:58.719672 kubelet[2618]: I0129 11:16:58.719645 2618 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 11:16:58.719672 kubelet[2618]: I0129 11:16:58.719665 2618 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 11:16:58.719858 kubelet[2618]: I0129 11:16:58.719811 2618 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 11:16:58.721238 kubelet[2618]: I0129 11:16:58.721129 2618 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 11:16:58.722364 kubelet[2618]: I0129 11:16:58.722243 2618 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 11:16:58.728050 kubelet[2618]: I0129 11:16:58.728031 2618 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 11:16:58.728265 kubelet[2618]: I0129 11:16:58.728244 2618 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 11:16:58.728418 kubelet[2618]: I0129 11:16:58.728267 2618 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 11:16:58.728492 kubelet[2618]: I0129 11:16:58.728434 2618 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 11:16:58.728492 kubelet[2618]: I0129 11:16:58.728443 2618 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 11:16:58.728492 kubelet[2618]: I0129 11:16:58.728475 2618 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:16:58.728576 kubelet[2618]: I0129 11:16:58.728563 2618 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 11:16:58.728600 kubelet[2618]: I0129 11:16:58.728579 2618 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 11:16:58.728710 kubelet[2618]: I0129 11:16:58.728604 2618 kubelet.go:312] "Adding apiserver pod source"
Jan 29 11:16:58.728710 kubelet[2618]: I0129 11:16:58.728616 2618 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 11:16:58.729548 kubelet[2618]: I0129 11:16:58.729522 2618 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 11:16:58.730063 kubelet[2618]: I0129 11:16:58.729662 2618 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 11:16:58.730063 kubelet[2618]: I0129 11:16:58.729990 2618 server.go:1264] "Started kubelet"
Jan 29 11:16:58.730537 kubelet[2618]: I0129 11:16:58.730486 2618 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 11:16:58.732430 kubelet[2618]: I0129 11:16:58.731296 2618 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 11:16:58.732430 kubelet[2618]: I0129 11:16:58.731524 2618 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 11:16:58.732897 kubelet[2618]: I0129 11:16:58.732872 2618 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 11:16:58.733629 kubelet[2618]: I0129 11:16:58.733007 2618 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 11:16:58.734771 kubelet[2618]: E0129 11:16:58.734750 2618 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 11:16:58.734823 kubelet[2618]: I0129 11:16:58.734791 2618 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 11:16:58.735017 kubelet[2618]: I0129 11:16:58.734871 2618 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 11:16:58.735017 kubelet[2618]: I0129 11:16:58.734989 2618 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 11:16:58.739279 kubelet[2618]: I0129 11:16:58.739253 2618 factory.go:221] Registration of the systemd container factory successfully
Jan 29 11:16:58.739351 kubelet[2618]: I0129 11:16:58.739338 2618 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 11:16:58.745524 kubelet[2618]: I0129 11:16:58.745490 2618 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 11:16:58.746842 kubelet[2618]: I0129 11:16:58.746528 2618 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 11:16:58.746842 kubelet[2618]: I0129 11:16:58.746564 2618 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 11:16:58.746842 kubelet[2618]: I0129 11:16:58.746579 2618 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 11:16:58.746842 kubelet[2618]: E0129 11:16:58.746616 2618 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 11:16:58.747215 kubelet[2618]: E0129 11:16:58.747189 2618 kubelet.go:1467] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:16:58.750366 kubelet[2618]: I0129 11:16:58.750333 2618 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:16:58.783719 kubelet[2618]: I0129 11:16:58.783694 2618 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:16:58.783719 kubelet[2618]: I0129 11:16:58.783712 2618 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:16:58.783719 kubelet[2618]: I0129 11:16:58.783730 2618 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:16:58.783881 kubelet[2618]: I0129 11:16:58.783873 2618 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:16:58.783904 kubelet[2618]: I0129 11:16:58.783885 2618 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:16:58.783904 kubelet[2618]: I0129 11:16:58.783901 2618 policy_none.go:49] "None policy: Start" Jan 29 11:16:58.784547 kubelet[2618]: I0129 11:16:58.784507 2618 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:16:58.784547 kubelet[2618]: I0129 11:16:58.784535 2618 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:16:58.784671 kubelet[2618]: I0129 11:16:58.784655 2618 state_mem.go:75] "Updated machine memory state" Jan 29 11:16:58.788647 kubelet[2618]: I0129 11:16:58.788617 2618 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:16:58.788823 kubelet[2618]: I0129 11:16:58.788791 2618 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:16:58.788930 kubelet[2618]: I0129 11:16:58.788900 2618 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:16:58.843296 kubelet[2618]: I0129 11:16:58.843192 2618 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:16:58.847227 kubelet[2618]: I0129 11:16:58.847185 2618 
topology_manager.go:215] "Topology Admit Handler" podUID="e309fa4c32b83881107e9d1036526175" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 11:16:58.847324 kubelet[2618]: I0129 11:16:58.847307 2618 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 11:16:58.847360 kubelet[2618]: I0129 11:16:58.847343 2618 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 11:16:58.849776 kubelet[2618]: I0129 11:16:58.849712 2618 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 29 11:16:58.849878 kubelet[2618]: I0129 11:16:58.849860 2618 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 11:16:58.852880 kubelet[2618]: E0129 11:16:58.852753 2618 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:16:58.939935 kubelet[2618]: I0129 11:16:58.939885 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e309fa4c32b83881107e9d1036526175-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e309fa4c32b83881107e9d1036526175\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:16:58.940083 kubelet[2618]: I0129 11:16:58.940065 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:16:58.940347 kubelet[2618]: I0129 11:16:58.940161 2618 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:16:58.940347 kubelet[2618]: I0129 11:16:58.940188 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:16:58.940347 kubelet[2618]: I0129 11:16:58.940206 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e309fa4c32b83881107e9d1036526175-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e309fa4c32b83881107e9d1036526175\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:16:58.940347 kubelet[2618]: I0129 11:16:58.940221 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e309fa4c32b83881107e9d1036526175-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e309fa4c32b83881107e9d1036526175\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:16:58.940347 kubelet[2618]: I0129 11:16:58.940236 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:16:58.940524 kubelet[2618]: I0129 11:16:58.940252 2618 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:16:58.940524 kubelet[2618]: I0129 11:16:58.940269 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:16:59.154240 kubelet[2618]: E0129 11:16:59.153946 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:16:59.154240 kubelet[2618]: E0129 11:16:59.154128 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:16:59.154240 kubelet[2618]: E0129 11:16:59.154136 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:16:59.315730 sudo[2654]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 11:16:59.316020 sudo[2654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 11:16:59.729691 kubelet[2618]: I0129 11:16:59.729641 2618 apiserver.go:52] "Watching apiserver" Jan 29 11:16:59.734141 sudo[2654]: pam_unix(sudo:session): session closed for user root Jan 29 11:16:59.735632 kubelet[2618]: I0129 11:16:59.735611 2618 desired_state_of_world_populator.go:157] "Finished populating initial 
desired state of world" Jan 29 11:16:59.769998 kubelet[2618]: E0129 11:16:59.769633 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:16:59.769998 kubelet[2618]: E0129 11:16:59.769922 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:16:59.774846 kubelet[2618]: E0129 11:16:59.774383 2618 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:16:59.774846 kubelet[2618]: E0129 11:16:59.774793 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:16:59.793586 kubelet[2618]: I0129 11:16:59.793530 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.793516511 podStartE2EDuration="1.793516511s" podCreationTimestamp="2025-01-29 11:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:16:59.786506951 +0000 UTC m=+1.107123361" watchObservedRunningTime="2025-01-29 11:16:59.793516511 +0000 UTC m=+1.114132921" Jan 29 11:16:59.800486 kubelet[2618]: I0129 11:16:59.799949 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.799936351 podStartE2EDuration="2.799936351s" podCreationTimestamp="2025-01-29 11:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:16:59.793923551 +0000 UTC m=+1.114539961" 
watchObservedRunningTime="2025-01-29 11:16:59.799936351 +0000 UTC m=+1.120552761" Jan 29 11:16:59.800486 kubelet[2618]: I0129 11:16:59.800033 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8000280709999998 podStartE2EDuration="1.800028071s" podCreationTimestamp="2025-01-29 11:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:16:59.799724271 +0000 UTC m=+1.120340681" watchObservedRunningTime="2025-01-29 11:16:59.800028071 +0000 UTC m=+1.120644481" Jan 29 11:17:00.770118 kubelet[2618]: E0129 11:17:00.770077 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:00.771522 kubelet[2618]: E0129 11:17:00.771504 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:01.688733 sudo[1628]: pam_unix(sudo:session): session closed for user root Jan 29 11:17:01.690162 sshd[1627]: Connection closed by 10.0.0.1 port 38496 Jan 29 11:17:01.690593 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:01.693138 systemd[1]: sshd@6-10.0.0.135:22-10.0.0.1:38496.service: Deactivated successfully. Jan 29 11:17:01.694697 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:17:01.694896 systemd[1]: session-7.scope: Consumed 6.879s CPU time, 192.5M memory peak, 0B memory swap peak. Jan 29 11:17:01.696001 systemd-logind[1434]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:17:01.696830 systemd-logind[1434]: Removed session 7. 
Jan 29 11:17:01.774181 kubelet[2618]: E0129 11:17:01.774142 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:06.852973 kubelet[2618]: E0129 11:17:06.852782 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:07.781972 kubelet[2618]: E0129 11:17:07.781924 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:09.884612 kubelet[2618]: E0129 11:17:09.884567 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:11.696035 kubelet[2618]: E0129 11:17:11.696000 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:13.542729 update_engine[1437]: I20250129 11:17:13.542665 1437 update_attempter.cc:509] Updating boot flags... Jan 29 11:17:13.571459 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2703) Jan 29 11:17:13.600636 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2704) Jan 29 11:17:14.790930 kubelet[2618]: I0129 11:17:14.790901 2618 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:17:14.791327 containerd[1448]: time="2025-01-29T11:17:14.791188890Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 11:17:14.791524 kubelet[2618]: I0129 11:17:14.791342 2618 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:17:14.951044 kubelet[2618]: I0129 11:17:14.950982 2618 topology_manager.go:215] "Topology Admit Handler" podUID="74f526b7-4f68-4729-95b8-107417cf2ba3" podNamespace="kube-system" podName="cilium-4n27g" Jan 29 11:17:14.951222 kubelet[2618]: I0129 11:17:14.951197 2618 topology_manager.go:215] "Topology Admit Handler" podUID="a5f1300c-7792-4a23-ae4c-34ae76bb9c41" podNamespace="kube-system" podName="kube-proxy-25nbm" Jan 29 11:17:14.970596 systemd[1]: Created slice kubepods-besteffort-poda5f1300c_7792_4a23_ae4c_34ae76bb9c41.slice - libcontainer container kubepods-besteffort-poda5f1300c_7792_4a23_ae4c_34ae76bb9c41.slice. Jan 29 11:17:14.974743 kubelet[2618]: I0129 11:17:14.974697 2618 topology_manager.go:215] "Topology Admit Handler" podUID="77aff757-b770-456a-955f-1126ffd22913" podNamespace="kube-system" podName="cilium-operator-599987898-x825c" Jan 29 11:17:14.985869 systemd[1]: Created slice kubepods-burstable-pod74f526b7_4f68_4729_95b8_107417cf2ba3.slice - libcontainer container kubepods-burstable-pod74f526b7_4f68_4729_95b8_107417cf2ba3.slice. Jan 29 11:17:14.993092 systemd[1]: Created slice kubepods-besteffort-pod77aff757_b770_456a_955f_1126ffd22913.slice - libcontainer container kubepods-besteffort-pod77aff757_b770_456a_955f_1126ffd22913.slice. 
Jan 29 11:17:15.064389 kubelet[2618]: I0129 11:17:15.064285 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74f526b7-4f68-4729-95b8-107417cf2ba3-clustermesh-secrets\") pod \"cilium-4n27g\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " pod="kube-system/cilium-4n27g" Jan 29 11:17:15.064389 kubelet[2618]: I0129 11:17:15.064360 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5f1300c-7792-4a23-ae4c-34ae76bb9c41-xtables-lock\") pod \"kube-proxy-25nbm\" (UID: \"a5f1300c-7792-4a23-ae4c-34ae76bb9c41\") " pod="kube-system/kube-proxy-25nbm" Jan 29 11:17:15.064513 kubelet[2618]: I0129 11:17:15.064395 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skkr9\" (UniqueName: \"kubernetes.io/projected/a5f1300c-7792-4a23-ae4c-34ae76bb9c41-kube-api-access-skkr9\") pod \"kube-proxy-25nbm\" (UID: \"a5f1300c-7792-4a23-ae4c-34ae76bb9c41\") " pod="kube-system/kube-proxy-25nbm" Jan 29 11:17:15.064513 kubelet[2618]: I0129 11:17:15.064435 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-bpf-maps\") pod \"cilium-4n27g\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " pod="kube-system/cilium-4n27g" Jan 29 11:17:15.064513 kubelet[2618]: I0129 11:17:15.064451 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-host-proc-sys-net\") pod \"cilium-4n27g\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " pod="kube-system/cilium-4n27g" Jan 29 11:17:15.064513 kubelet[2618]: I0129 11:17:15.064472 2618 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-host-proc-sys-kernel\") pod \"cilium-4n27g\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " pod="kube-system/cilium-4n27g" Jan 29 11:17:15.064513 kubelet[2618]: I0129 11:17:15.064487 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwrx2\" (UniqueName: \"kubernetes.io/projected/74f526b7-4f68-4729-95b8-107417cf2ba3-kube-api-access-pwrx2\") pod \"cilium-4n27g\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " pod="kube-system/cilium-4n27g" Jan 29 11:17:15.064643 kubelet[2618]: I0129 11:17:15.064547 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5f1300c-7792-4a23-ae4c-34ae76bb9c41-lib-modules\") pod \"kube-proxy-25nbm\" (UID: \"a5f1300c-7792-4a23-ae4c-34ae76bb9c41\") " pod="kube-system/kube-proxy-25nbm" Jan 29 11:17:15.064643 kubelet[2618]: I0129 11:17:15.064565 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-etc-cni-netd\") pod \"cilium-4n27g\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " pod="kube-system/cilium-4n27g" Jan 29 11:17:15.064643 kubelet[2618]: I0129 11:17:15.064580 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-hostproc\") pod \"cilium-4n27g\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " pod="kube-system/cilium-4n27g" Jan 29 11:17:15.064643 kubelet[2618]: I0129 11:17:15.064603 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/77aff757-b770-456a-955f-1126ffd22913-cilium-config-path\") pod \"cilium-operator-599987898-x825c\" (UID: \"77aff757-b770-456a-955f-1126ffd22913\") " pod="kube-system/cilium-operator-599987898-x825c" Jan 29 11:17:15.064725 kubelet[2618]: I0129 11:17:15.064646 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-cilium-cgroup\") pod \"cilium-4n27g\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " pod="kube-system/cilium-4n27g" Jan 29 11:17:15.064725 kubelet[2618]: I0129 11:17:15.064686 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74f526b7-4f68-4729-95b8-107417cf2ba3-hubble-tls\") pod \"cilium-4n27g\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " pod="kube-system/cilium-4n27g" Jan 29 11:17:15.064725 kubelet[2618]: I0129 11:17:15.064704 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a5f1300c-7792-4a23-ae4c-34ae76bb9c41-kube-proxy\") pod \"kube-proxy-25nbm\" (UID: \"a5f1300c-7792-4a23-ae4c-34ae76bb9c41\") " pod="kube-system/kube-proxy-25nbm" Jan 29 11:17:15.064784 kubelet[2618]: I0129 11:17:15.064730 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-cilium-run\") pod \"cilium-4n27g\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " pod="kube-system/cilium-4n27g" Jan 29 11:17:15.064784 kubelet[2618]: I0129 11:17:15.064751 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-cni-path\") pod \"cilium-4n27g\" (UID: 
\"74f526b7-4f68-4729-95b8-107417cf2ba3\") " pod="kube-system/cilium-4n27g" Jan 29 11:17:15.064784 kubelet[2618]: I0129 11:17:15.064766 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74f526b7-4f68-4729-95b8-107417cf2ba3-cilium-config-path\") pod \"cilium-4n27g\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " pod="kube-system/cilium-4n27g" Jan 29 11:17:15.064847 kubelet[2618]: I0129 11:17:15.064787 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbnw\" (UniqueName: \"kubernetes.io/projected/77aff757-b770-456a-955f-1126ffd22913-kube-api-access-bfbnw\") pod \"cilium-operator-599987898-x825c\" (UID: \"77aff757-b770-456a-955f-1126ffd22913\") " pod="kube-system/cilium-operator-599987898-x825c" Jan 29 11:17:15.064847 kubelet[2618]: I0129 11:17:15.064804 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-lib-modules\") pod \"cilium-4n27g\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " pod="kube-system/cilium-4n27g" Jan 29 11:17:15.064847 kubelet[2618]: I0129 11:17:15.064818 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-xtables-lock\") pod \"cilium-4n27g\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " pod="kube-system/cilium-4n27g" Jan 29 11:17:15.280537 kubelet[2618]: E0129 11:17:15.280486 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:15.286537 containerd[1448]: time="2025-01-29T11:17:15.286499517Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-25nbm,Uid:a5f1300c-7792-4a23-ae4c-34ae76bb9c41,Namespace:kube-system,Attempt:0,}" Jan 29 11:17:15.290756 kubelet[2618]: E0129 11:17:15.290705 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:15.291513 containerd[1448]: time="2025-01-29T11:17:15.291464600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4n27g,Uid:74f526b7-4f68-4729-95b8-107417cf2ba3,Namespace:kube-system,Attempt:0,}" Jan 29 11:17:15.300265 kubelet[2618]: E0129 11:17:15.299855 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:15.300836 containerd[1448]: time="2025-01-29T11:17:15.300649291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-x825c,Uid:77aff757-b770-456a-955f-1126ffd22913,Namespace:kube-system,Attempt:0,}" Jan 29 11:17:15.312943 containerd[1448]: time="2025-01-29T11:17:15.312810880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:17:15.312943 containerd[1448]: time="2025-01-29T11:17:15.312879520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:17:15.312943 containerd[1448]: time="2025-01-29T11:17:15.312894439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:15.313105 containerd[1448]: time="2025-01-29T11:17:15.313048238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:15.316458 containerd[1448]: time="2025-01-29T11:17:15.316278134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:17:15.316788 containerd[1448]: time="2025-01-29T11:17:15.316739771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:17:15.316788 containerd[1448]: time="2025-01-29T11:17:15.316764410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:15.317235 containerd[1448]: time="2025-01-29T11:17:15.317192727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:15.333403 systemd[1]: Started cri-containerd-31c6f810c15d21962c58b4931cf25ed914a3994e9295779b563e48dc641d5b9e.scope - libcontainer container 31c6f810c15d21962c58b4931cf25ed914a3994e9295779b563e48dc641d5b9e. Jan 29 11:17:15.336667 systemd[1]: Started cri-containerd-c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842.scope - libcontainer container c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842. Jan 29 11:17:15.361244 containerd[1448]: time="2025-01-29T11:17:15.361104878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:17:15.361244 containerd[1448]: time="2025-01-29T11:17:15.361210918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:17:15.361730 containerd[1448]: time="2025-01-29T11:17:15.361621995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:15.362221 containerd[1448]: time="2025-01-29T11:17:15.362175790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:15.362490 containerd[1448]: time="2025-01-29T11:17:15.362343429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4n27g,Uid:74f526b7-4f68-4729-95b8-107417cf2ba3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842\"" Jan 29 11:17:15.363250 kubelet[2618]: E0129 11:17:15.363227 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:15.372858 containerd[1448]: time="2025-01-29T11:17:15.372778831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-25nbm,Uid:a5f1300c-7792-4a23-ae4c-34ae76bb9c41,Namespace:kube-system,Attempt:0,} returns sandbox id \"31c6f810c15d21962c58b4931cf25ed914a3994e9295779b563e48dc641d5b9e\"" Jan 29 11:17:15.373741 kubelet[2618]: E0129 11:17:15.373722 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:15.374184 containerd[1448]: time="2025-01-29T11:17:15.374143741Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 11:17:15.377174 containerd[1448]: time="2025-01-29T11:17:15.377146438Z" level=info msg="CreateContainer within sandbox \"31c6f810c15d21962c58b4931cf25ed914a3994e9295779b563e48dc641d5b9e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:17:15.392568 systemd[1]: Started cri-containerd-5b6c4a73545a48cb28cde13beae286d1a43e17775a5211f1d63a0f97392ccbc1.scope - libcontainer container 
5b6c4a73545a48cb28cde13beae286d1a43e17775a5211f1d63a0f97392ccbc1. Jan 29 11:17:15.409450 containerd[1448]: time="2025-01-29T11:17:15.409094559Z" level=info msg="CreateContainer within sandbox \"31c6f810c15d21962c58b4931cf25ed914a3994e9295779b563e48dc641d5b9e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"016c659eb5ba6fc1e8527a40cf7d8485ae0ec48ca9a7547bb443191a486d6255\"" Jan 29 11:17:15.414416 containerd[1448]: time="2025-01-29T11:17:15.414372320Z" level=info msg="StartContainer for \"016c659eb5ba6fc1e8527a40cf7d8485ae0ec48ca9a7547bb443191a486d6255\"" Jan 29 11:17:15.426461 containerd[1448]: time="2025-01-29T11:17:15.426357830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-x825c,Uid:77aff757-b770-456a-955f-1126ffd22913,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b6c4a73545a48cb28cde13beae286d1a43e17775a5211f1d63a0f97392ccbc1\"" Jan 29 11:17:15.427180 kubelet[2618]: E0129 11:17:15.427155 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:15.439560 systemd[1]: Started cri-containerd-016c659eb5ba6fc1e8527a40cf7d8485ae0ec48ca9a7547bb443191a486d6255.scope - libcontainer container 016c659eb5ba6fc1e8527a40cf7d8485ae0ec48ca9a7547bb443191a486d6255. 
Jan 29 11:17:15.466050 containerd[1448]: time="2025-01-29T11:17:15.466005413Z" level=info msg="StartContainer for \"016c659eb5ba6fc1e8527a40cf7d8485ae0ec48ca9a7547bb443191a486d6255\" returns successfully" Jan 29 11:17:15.797198 kubelet[2618]: E0129 11:17:15.797129 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:15.809009 kubelet[2618]: I0129 11:17:15.808902 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-25nbm" podStartSLOduration=1.808886486 podStartE2EDuration="1.808886486s" podCreationTimestamp="2025-01-29 11:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:17:15.807523456 +0000 UTC m=+17.128139866" watchObservedRunningTime="2025-01-29 11:17:15.808886486 +0000 UTC m=+17.129502896" Jan 29 11:17:21.663202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1586221020.mount: Deactivated successfully. 
Jan 29 11:17:22.993001 containerd[1448]: time="2025-01-29T11:17:22.992761021Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:22.993879 containerd[1448]: time="2025-01-29T11:17:22.993830136Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 29 11:17:22.996436 containerd[1448]: time="2025-01-29T11:17:22.994961770Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:22.996583 containerd[1448]: time="2025-01-29T11:17:22.996524403Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.622331502s" Jan 29 11:17:22.996583 containerd[1448]: time="2025-01-29T11:17:22.996576803Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 29 11:17:22.999461 containerd[1448]: time="2025-01-29T11:17:22.999167150Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:17:23.004005 containerd[1448]: time="2025-01-29T11:17:23.003956688Z" level=info msg="CreateContainer within sandbox \"c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:17:23.029627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2042185762.mount: Deactivated successfully. Jan 29 11:17:23.031670 containerd[1448]: time="2025-01-29T11:17:23.031615605Z" level=info msg="CreateContainer within sandbox \"c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007\"" Jan 29 11:17:23.033021 containerd[1448]: time="2025-01-29T11:17:23.032971439Z" level=info msg="StartContainer for \"69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007\"" Jan 29 11:17:23.067592 systemd[1]: Started cri-containerd-69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007.scope - libcontainer container 69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007. Jan 29 11:17:23.096986 containerd[1448]: time="2025-01-29T11:17:23.096935393Z" level=info msg="StartContainer for \"69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007\" returns successfully" Jan 29 11:17:23.149708 systemd[1]: cri-containerd-69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007.scope: Deactivated successfully. 
Jan 29 11:17:23.245980 containerd[1448]: time="2025-01-29T11:17:23.236110531Z" level=info msg="shim disconnected" id=69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007 namespace=k8s.io Jan 29 11:17:23.245980 containerd[1448]: time="2025-01-29T11:17:23.245910167Z" level=warning msg="cleaning up after shim disconnected" id=69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007 namespace=k8s.io Jan 29 11:17:23.245980 containerd[1448]: time="2025-01-29T11:17:23.245925287Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:17:23.818532 kubelet[2618]: E0129 11:17:23.818387 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:23.822617 containerd[1448]: time="2025-01-29T11:17:23.822549871Z" level=info msg="CreateContainer within sandbox \"c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:17:23.832848 containerd[1448]: time="2025-01-29T11:17:23.832790225Z" level=info msg="CreateContainer within sandbox \"c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e\"" Jan 29 11:17:23.834036 containerd[1448]: time="2025-01-29T11:17:23.833281583Z" level=info msg="StartContainer for \"293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e\"" Jan 29 11:17:23.869620 systemd[1]: Started cri-containerd-293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e.scope - libcontainer container 293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e. 
Jan 29 11:17:23.894141 containerd[1448]: time="2025-01-29T11:17:23.893681033Z" level=info msg="StartContainer for \"293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e\" returns successfully" Jan 29 11:17:23.916668 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:17:23.918763 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:17:23.918841 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:17:23.926710 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:17:23.926940 systemd[1]: cri-containerd-293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e.scope: Deactivated successfully. Jan 29 11:17:23.948261 containerd[1448]: time="2025-01-29T11:17:23.948209829Z" level=info msg="shim disconnected" id=293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e namespace=k8s.io Jan 29 11:17:23.948685 containerd[1448]: time="2025-01-29T11:17:23.948314509Z" level=warning msg="cleaning up after shim disconnected" id=293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e namespace=k8s.io Jan 29 11:17:23.948685 containerd[1448]: time="2025-01-29T11:17:23.948324789Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:17:23.956659 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:17:24.025801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007-rootfs.mount: Deactivated successfully. 
Jan 29 11:17:24.821877 kubelet[2618]: E0129 11:17:24.821845 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:24.823919 containerd[1448]: time="2025-01-29T11:17:24.823880787Z" level=info msg="CreateContainer within sandbox \"c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:17:24.868936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1303966778.mount: Deactivated successfully. Jan 29 11:17:24.871317 containerd[1448]: time="2025-01-29T11:17:24.871276548Z" level=info msg="CreateContainer within sandbox \"c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815\"" Jan 29 11:17:24.871838 containerd[1448]: time="2025-01-29T11:17:24.871814426Z" level=info msg="StartContainer for \"b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815\"" Jan 29 11:17:24.912608 systemd[1]: Started cri-containerd-b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815.scope - libcontainer container b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815. Jan 29 11:17:24.945636 systemd[1]: Started sshd@7-10.0.0.135:22-10.0.0.1:44934.service - OpenSSH per-connection server daemon (10.0.0.1:44934). Jan 29 11:17:24.972483 containerd[1448]: time="2025-01-29T11:17:24.972444125Z" level=info msg="StartContainer for \"b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815\" returns successfully" Jan 29 11:17:24.985087 systemd[1]: cri-containerd-b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815.scope: Deactivated successfully. 
Jan 29 11:17:25.019150 containerd[1448]: time="2025-01-29T11:17:25.018958855Z" level=info msg="shim disconnected" id=b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815 namespace=k8s.io Jan 29 11:17:25.019150 containerd[1448]: time="2025-01-29T11:17:25.019011254Z" level=warning msg="cleaning up after shim disconnected" id=b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815 namespace=k8s.io Jan 29 11:17:25.019150 containerd[1448]: time="2025-01-29T11:17:25.019019014Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:17:25.023574 sshd[3176]: Accepted publickey for core from 10.0.0.1 port 44934 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:17:25.025339 sshd-session[3176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:25.026035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815-rootfs.mount: Deactivated successfully. Jan 29 11:17:25.030772 systemd-logind[1434]: New session 8 of user core. Jan 29 11:17:25.034571 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:17:25.155067 sshd[3212]: Connection closed by 10.0.0.1 port 44934 Jan 29 11:17:25.154956 sshd-session[3176]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:25.158621 systemd[1]: sshd@7-10.0.0.135:22-10.0.0.1:44934.service: Deactivated successfully. Jan 29 11:17:25.160355 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:17:25.163726 systemd-logind[1434]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:17:25.164578 systemd-logind[1434]: Removed session 8. 
Jan 29 11:17:25.826037 kubelet[2618]: E0129 11:17:25.825956 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:25.830181 containerd[1448]: time="2025-01-29T11:17:25.830032549Z" level=info msg="CreateContainer within sandbox \"c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:17:25.842068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3267818648.mount: Deactivated successfully. Jan 29 11:17:25.847985 containerd[1448]: time="2025-01-29T11:17:25.847941679Z" level=info msg="CreateContainer within sandbox \"c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda\"" Jan 29 11:17:25.848607 containerd[1448]: time="2025-01-29T11:17:25.848575917Z" level=info msg="StartContainer for \"a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda\"" Jan 29 11:17:25.874582 systemd[1]: Started cri-containerd-a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda.scope - libcontainer container a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda. Jan 29 11:17:25.895372 systemd[1]: cri-containerd-a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda.scope: Deactivated successfully. 
Jan 29 11:17:25.897543 containerd[1448]: time="2025-01-29T11:17:25.897426045Z" level=info msg="StartContainer for \"a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda\" returns successfully" Jan 29 11:17:25.918738 containerd[1448]: time="2025-01-29T11:17:25.918686761Z" level=info msg="shim disconnected" id=a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda namespace=k8s.io Jan 29 11:17:25.919025 containerd[1448]: time="2025-01-29T11:17:25.918890960Z" level=warning msg="cleaning up after shim disconnected" id=a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda namespace=k8s.io Jan 29 11:17:25.919025 containerd[1448]: time="2025-01-29T11:17:25.918906760Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:17:26.025403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda-rootfs.mount: Deactivated successfully. Jan 29 11:17:26.829294 kubelet[2618]: E0129 11:17:26.829201 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:26.832955 containerd[1448]: time="2025-01-29T11:17:26.832793896Z" level=info msg="CreateContainer within sandbox \"c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:17:26.847887 containerd[1448]: time="2025-01-29T11:17:26.847847600Z" level=info msg="CreateContainer within sandbox \"c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19\"" Jan 29 11:17:26.848534 containerd[1448]: time="2025-01-29T11:17:26.848496798Z" level=info msg="StartContainer for \"6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19\"" Jan 29 11:17:26.873608 
systemd[1]: Started cri-containerd-6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19.scope - libcontainer container 6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19. Jan 29 11:17:26.901793 containerd[1448]: time="2025-01-29T11:17:26.901678242Z" level=info msg="StartContainer for \"6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19\" returns successfully" Jan 29 11:17:27.009917 kubelet[2618]: I0129 11:17:27.009887 2618 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 11:17:27.034878 kubelet[2618]: I0129 11:17:27.034839 2618 topology_manager.go:215] "Topology Admit Handler" podUID="8af18951-a2c3-4a37-b081-f9918a3669cb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nrm8n" Jan 29 11:17:27.036940 kubelet[2618]: I0129 11:17:27.036908 2618 topology_manager.go:215] "Topology Admit Handler" podUID="45453cce-f88d-4c69-8130-0e07dc35af9d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bb4wd" Jan 29 11:17:27.049958 systemd[1]: Created slice kubepods-burstable-pod8af18951_a2c3_4a37_b081_f9918a3669cb.slice - libcontainer container kubepods-burstable-pod8af18951_a2c3_4a37_b081_f9918a3669cb.slice. Jan 29 11:17:27.057834 systemd[1]: Created slice kubepods-burstable-pod45453cce_f88d_4c69_8130_0e07dc35af9d.slice - libcontainer container kubepods-burstable-pod45453cce_f88d_4c69_8130_0e07dc35af9d.slice. 
Jan 29 11:17:27.155209 kubelet[2618]: I0129 11:17:27.155047 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swrxf\" (UniqueName: \"kubernetes.io/projected/45453cce-f88d-4c69-8130-0e07dc35af9d-kube-api-access-swrxf\") pod \"coredns-7db6d8ff4d-bb4wd\" (UID: \"45453cce-f88d-4c69-8130-0e07dc35af9d\") " pod="kube-system/coredns-7db6d8ff4d-bb4wd" Jan 29 11:17:27.155209 kubelet[2618]: I0129 11:17:27.155101 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8af18951-a2c3-4a37-b081-f9918a3669cb-config-volume\") pod \"coredns-7db6d8ff4d-nrm8n\" (UID: \"8af18951-a2c3-4a37-b081-f9918a3669cb\") " pod="kube-system/coredns-7db6d8ff4d-nrm8n" Jan 29 11:17:27.155209 kubelet[2618]: I0129 11:17:27.155122 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45453cce-f88d-4c69-8130-0e07dc35af9d-config-volume\") pod \"coredns-7db6d8ff4d-bb4wd\" (UID: \"45453cce-f88d-4c69-8130-0e07dc35af9d\") " pod="kube-system/coredns-7db6d8ff4d-bb4wd" Jan 29 11:17:27.155209 kubelet[2618]: I0129 11:17:27.155139 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbvds\" (UniqueName: \"kubernetes.io/projected/8af18951-a2c3-4a37-b081-f9918a3669cb-kube-api-access-dbvds\") pod \"coredns-7db6d8ff4d-nrm8n\" (UID: \"8af18951-a2c3-4a37-b081-f9918a3669cb\") " pod="kube-system/coredns-7db6d8ff4d-nrm8n" Jan 29 11:17:27.357547 kubelet[2618]: E0129 11:17:27.355457 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:27.357770 containerd[1448]: time="2025-01-29T11:17:27.357726006Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nrm8n,Uid:8af18951-a2c3-4a37-b081-f9918a3669cb,Namespace:kube-system,Attempt:0,}" Jan 29 11:17:27.361462 kubelet[2618]: E0129 11:17:27.361311 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:27.362028 containerd[1448]: time="2025-01-29T11:17:27.361983591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bb4wd,Uid:45453cce-f88d-4c69-8130-0e07dc35af9d,Namespace:kube-system,Attempt:0,}" Jan 29 11:17:27.833775 kubelet[2618]: E0129 11:17:27.833745 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:27.847864 containerd[1448]: time="2025-01-29T11:17:27.847816474Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:27.848950 containerd[1448]: time="2025-01-29T11:17:27.848611311Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 11:17:27.850350 containerd[1448]: time="2025-01-29T11:17:27.849775147Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:17:27.850767 kubelet[2618]: I0129 11:17:27.850713 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4n27g" podStartSLOduration=6.21968778 podStartE2EDuration="13.850698904s" podCreationTimestamp="2025-01-29 11:17:14 +0000 UTC" firstStartedPulling="2025-01-29 11:17:15.367657589 +0000 UTC m=+16.688273999" 
lastFinishedPulling="2025-01-29 11:17:22.998668713 +0000 UTC m=+24.319285123" observedRunningTime="2025-01-29 11:17:27.850288305 +0000 UTC m=+29.170904715" watchObservedRunningTime="2025-01-29 11:17:27.850698904 +0000 UTC m=+29.171315274" Jan 29 11:17:27.852507 containerd[1448]: time="2025-01-29T11:17:27.852456618Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.853256828s" Jan 29 11:17:27.852507 containerd[1448]: time="2025-01-29T11:17:27.852497818Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 11:17:27.855512 containerd[1448]: time="2025-01-29T11:17:27.855475488Z" level=info msg="CreateContainer within sandbox \"5b6c4a73545a48cb28cde13beae286d1a43e17775a5211f1d63a0f97392ccbc1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:17:27.869118 containerd[1448]: time="2025-01-29T11:17:27.869070841Z" level=info msg="CreateContainer within sandbox \"5b6c4a73545a48cb28cde13beae286d1a43e17775a5211f1d63a0f97392ccbc1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5\"" Jan 29 11:17:27.869702 containerd[1448]: time="2025-01-29T11:17:27.869671439Z" level=info msg="StartContainer for \"2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5\"" Jan 29 11:17:27.893666 systemd[1]: Started cri-containerd-2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5.scope - libcontainer container 
2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5. Jan 29 11:17:27.921094 containerd[1448]: time="2025-01-29T11:17:27.920979941Z" level=info msg="StartContainer for \"2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5\" returns successfully" Jan 29 11:17:28.835551 kubelet[2618]: E0129 11:17:28.835237 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:28.837285 kubelet[2618]: E0129 11:17:28.837157 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:28.844609 kubelet[2618]: I0129 11:17:28.844546 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-x825c" podStartSLOduration=2.419543217 podStartE2EDuration="14.844532976s" podCreationTimestamp="2025-01-29 11:17:14 +0000 UTC" firstStartedPulling="2025-01-29 11:17:15.428179976 +0000 UTC m=+16.748796346" lastFinishedPulling="2025-01-29 11:17:27.853169695 +0000 UTC m=+29.173786105" observedRunningTime="2025-01-29 11:17:28.844230897 +0000 UTC m=+30.164847307" watchObservedRunningTime="2025-01-29 11:17:28.844532976 +0000 UTC m=+30.165149386" Jan 29 11:17:29.839556 kubelet[2618]: E0129 11:17:29.839394 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:29.852773 kubelet[2618]: E0129 11:17:29.840149 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:30.189103 systemd[1]: Started sshd@8-10.0.0.135:22-10.0.0.1:44936.service - OpenSSH per-connection server daemon (10.0.0.1:44936). 
Jan 29 11:17:30.234050 sshd[3485]: Accepted publickey for core from 10.0.0.1 port 44936 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:17:30.235683 sshd-session[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:30.240965 systemd-logind[1434]: New session 9 of user core. Jan 29 11:17:30.251607 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:17:30.372433 sshd[3487]: Connection closed by 10.0.0.1 port 44936 Jan 29 11:17:30.372750 sshd-session[3485]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:30.375798 systemd[1]: sshd@8-10.0.0.135:22-10.0.0.1:44936.service: Deactivated successfully. Jan 29 11:17:30.377508 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:17:30.378952 systemd-logind[1434]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:17:30.379693 systemd-logind[1434]: Removed session 9. Jan 29 11:17:30.850538 kubelet[2618]: E0129 11:17:30.850470 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:31.905259 systemd-networkd[1387]: cilium_host: Link UP Jan 29 11:17:31.905508 systemd-networkd[1387]: cilium_net: Link UP Jan 29 11:17:31.905774 systemd-networkd[1387]: cilium_net: Gained carrier Jan 29 11:17:31.906010 systemd-networkd[1387]: cilium_host: Gained carrier Jan 29 11:17:31.993420 systemd-networkd[1387]: cilium_vxlan: Link UP Jan 29 11:17:31.993490 systemd-networkd[1387]: cilium_vxlan: Gained carrier Jan 29 11:17:32.298451 kernel: NET: Registered PF_ALG protocol family Jan 29 11:17:32.437616 systemd-networkd[1387]: cilium_host: Gained IPv6LL Jan 29 11:17:32.566536 systemd-networkd[1387]: cilium_net: Gained IPv6LL Jan 29 11:17:32.876381 systemd-networkd[1387]: lxc_health: Link UP Jan 29 11:17:32.883233 systemd-networkd[1387]: lxc_health: Gained carrier Jan 29 11:17:33.013570 
systemd-networkd[1387]: cilium_vxlan: Gained IPv6LL Jan 29 11:17:33.308491 kubelet[2618]: E0129 11:17:33.299168 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:33.482463 systemd-networkd[1387]: lxcd33f6bdd7c99: Link UP Jan 29 11:17:33.497513 kernel: eth0: renamed from tmp3fcd5 Jan 29 11:17:33.514708 systemd-networkd[1387]: lxc33dcf894363a: Link UP Jan 29 11:17:33.516444 kernel: eth0: renamed from tmp237d4 Jan 29 11:17:33.521071 systemd-networkd[1387]: lxc33dcf894363a: Gained carrier Jan 29 11:17:33.521204 systemd-networkd[1387]: lxcd33f6bdd7c99: Gained carrier Jan 29 11:17:33.851224 kubelet[2618]: E0129 11:17:33.851170 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:34.101579 systemd-networkd[1387]: lxc_health: Gained IPv6LL Jan 29 11:17:34.933636 systemd-networkd[1387]: lxc33dcf894363a: Gained IPv6LL Jan 29 11:17:34.934205 systemd-networkd[1387]: lxcd33f6bdd7c99: Gained IPv6LL Jan 29 11:17:35.387092 systemd[1]: Started sshd@9-10.0.0.135:22-10.0.0.1:35066.service - OpenSSH per-connection server daemon (10.0.0.1:35066). Jan 29 11:17:35.440159 sshd[3881]: Accepted publickey for core from 10.0.0.1 port 35066 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:17:35.441483 sshd-session[3881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:35.445497 systemd-logind[1434]: New session 10 of user core. Jan 29 11:17:35.451573 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 29 11:17:35.583535 sshd[3883]: Connection closed by 10.0.0.1 port 35066 Jan 29 11:17:35.583922 sshd-session[3881]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:35.587230 systemd[1]: sshd@9-10.0.0.135:22-10.0.0.1:35066.service: Deactivated successfully. Jan 29 11:17:35.589095 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:17:35.590893 systemd-logind[1434]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:17:35.591740 systemd-logind[1434]: Removed session 10. Jan 29 11:17:37.014952 containerd[1448]: time="2025-01-29T11:17:37.014547152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:17:37.014952 containerd[1448]: time="2025-01-29T11:17:37.014873592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:17:37.014952 containerd[1448]: time="2025-01-29T11:17:37.014885192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:37.015457 containerd[1448]: time="2025-01-29T11:17:37.014953951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:37.019443 containerd[1448]: time="2025-01-29T11:17:37.016697868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:17:37.019443 containerd[1448]: time="2025-01-29T11:17:37.016745668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:17:37.019443 containerd[1448]: time="2025-01-29T11:17:37.016756628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:37.019443 containerd[1448]: time="2025-01-29T11:17:37.016828348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:17:37.034544 systemd[1]: Started cri-containerd-237d473de6ba30a895db7602c93fc9ca9bf8a5424d088cb187e9fc4646be225c.scope - libcontainer container 237d473de6ba30a895db7602c93fc9ca9bf8a5424d088cb187e9fc4646be225c. Jan 29 11:17:37.041963 systemd[1]: Started cri-containerd-3fcd5e0c2ec6177f3f37afc93d14ee295fd8536311010360b459bf8e991d52c3.scope - libcontainer container 3fcd5e0c2ec6177f3f37afc93d14ee295fd8536311010360b459bf8e991d52c3. Jan 29 11:17:37.050830 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:17:37.054618 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:17:37.070122 containerd[1448]: time="2025-01-29T11:17:37.070089252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nrm8n,Uid:8af18951-a2c3-4a37-b081-f9918a3669cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"237d473de6ba30a895db7602c93fc9ca9bf8a5424d088cb187e9fc4646be225c\"" Jan 29 11:17:37.071128 kubelet[2618]: E0129 11:17:37.070805 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:37.072395 containerd[1448]: time="2025-01-29T11:17:37.072372447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bb4wd,Uid:45453cce-f88d-4c69-8130-0e07dc35af9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fcd5e0c2ec6177f3f37afc93d14ee295fd8536311010360b459bf8e991d52c3\"" Jan 29 11:17:37.073644 kubelet[2618]: E0129 11:17:37.073462 2618 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:37.078625 containerd[1448]: time="2025-01-29T11:17:37.078593716Z" level=info msg="CreateContainer within sandbox \"3fcd5e0c2ec6177f3f37afc93d14ee295fd8536311010360b459bf8e991d52c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:17:37.078960 containerd[1448]: time="2025-01-29T11:17:37.078927436Z" level=info msg="CreateContainer within sandbox \"237d473de6ba30a895db7602c93fc9ca9bf8a5424d088cb187e9fc4646be225c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:17:37.095389 containerd[1448]: time="2025-01-29T11:17:37.095357246Z" level=info msg="CreateContainer within sandbox \"3fcd5e0c2ec6177f3f37afc93d14ee295fd8536311010360b459bf8e991d52c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7314e00e00ae87d2dbc6aac347dd0b98985cca14aa23a72cfbf68499827e49b9\"" Jan 29 11:17:37.096119 containerd[1448]: time="2025-01-29T11:17:37.096016645Z" level=info msg="StartContainer for \"7314e00e00ae87d2dbc6aac347dd0b98985cca14aa23a72cfbf68499827e49b9\"" Jan 29 11:17:37.097396 containerd[1448]: time="2025-01-29T11:17:37.097368682Z" level=info msg="CreateContainer within sandbox \"237d473de6ba30a895db7602c93fc9ca9bf8a5424d088cb187e9fc4646be225c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e7e3abef49b20ad4f426fc220013b425cfcba77c3f8f4d0fcd399339e31455e\"" Jan 29 11:17:37.097768 containerd[1448]: time="2025-01-29T11:17:37.097742522Z" level=info msg="StartContainer for \"9e7e3abef49b20ad4f426fc220013b425cfcba77c3f8f4d0fcd399339e31455e\"" Jan 29 11:17:37.120571 systemd[1]: Started cri-containerd-7314e00e00ae87d2dbc6aac347dd0b98985cca14aa23a72cfbf68499827e49b9.scope - libcontainer container 7314e00e00ae87d2dbc6aac347dd0b98985cca14aa23a72cfbf68499827e49b9. 
Jan 29 11:17:37.123181 systemd[1]: Started cri-containerd-9e7e3abef49b20ad4f426fc220013b425cfcba77c3f8f4d0fcd399339e31455e.scope - libcontainer container 9e7e3abef49b20ad4f426fc220013b425cfcba77c3f8f4d0fcd399339e31455e. Jan 29 11:17:37.148699 containerd[1448]: time="2025-01-29T11:17:37.147618831Z" level=info msg="StartContainer for \"7314e00e00ae87d2dbc6aac347dd0b98985cca14aa23a72cfbf68499827e49b9\" returns successfully" Jan 29 11:17:37.157598 containerd[1448]: time="2025-01-29T11:17:37.157568653Z" level=info msg="StartContainer for \"9e7e3abef49b20ad4f426fc220013b425cfcba77c3f8f4d0fcd399339e31455e\" returns successfully" Jan 29 11:17:37.863374 kubelet[2618]: E0129 11:17:37.863241 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:37.866609 kubelet[2618]: E0129 11:17:37.866575 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:37.872979 kubelet[2618]: I0129 11:17:37.872755 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bb4wd" podStartSLOduration=23.872741919 podStartE2EDuration="23.872741919s" podCreationTimestamp="2025-01-29 11:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:17:37.87217156 +0000 UTC m=+39.192787970" watchObservedRunningTime="2025-01-29 11:17:37.872741919 +0000 UTC m=+39.193358369" Jan 29 11:17:37.881898 kubelet[2618]: I0129 11:17:37.881250 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nrm8n" podStartSLOduration=23.881237343 podStartE2EDuration="23.881237343s" podCreationTimestamp="2025-01-29 11:17:14 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:17:37.881155383 +0000 UTC m=+39.201771793" watchObservedRunningTime="2025-01-29 11:17:37.881237343 +0000 UTC m=+39.201853713" Jan 29 11:17:38.868056 kubelet[2618]: E0129 11:17:38.868022 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:38.868748 kubelet[2618]: E0129 11:17:38.868371 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:39.869181 kubelet[2618]: E0129 11:17:39.869145 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:39.869546 kubelet[2618]: E0129 11:17:39.869186 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:17:40.597886 systemd[1]: Started sshd@10-10.0.0.135:22-10.0.0.1:35076.service - OpenSSH per-connection server daemon (10.0.0.1:35076). Jan 29 11:17:40.648608 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 35076 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:17:40.650001 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:40.653420 systemd-logind[1434]: New session 11 of user core. Jan 29 11:17:40.659648 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 29 11:17:40.769555 sshd[4072]: Connection closed by 10.0.0.1 port 35076 Jan 29 11:17:40.770993 sshd-session[4070]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:40.778753 systemd[1]: sshd@10-10.0.0.135:22-10.0.0.1:35076.service: Deactivated successfully. Jan 29 11:17:40.780118 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:17:40.781262 systemd-logind[1434]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:17:40.782341 systemd[1]: Started sshd@11-10.0.0.135:22-10.0.0.1:35088.service - OpenSSH per-connection server daemon (10.0.0.1:35088). Jan 29 11:17:40.783899 systemd-logind[1434]: Removed session 11. Jan 29 11:17:40.828836 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 35088 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:17:40.830081 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:40.833945 systemd-logind[1434]: New session 12 of user core. Jan 29 11:17:40.841544 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:17:40.994247 sshd[4089]: Connection closed by 10.0.0.1 port 35088 Jan 29 11:17:40.994970 sshd-session[4087]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:41.006116 systemd[1]: sshd@11-10.0.0.135:22-10.0.0.1:35088.service: Deactivated successfully. Jan 29 11:17:41.008824 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:17:41.010710 systemd-logind[1434]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:17:41.021033 systemd[1]: Started sshd@12-10.0.0.135:22-10.0.0.1:35102.service - OpenSSH per-connection server daemon (10.0.0.1:35102). Jan 29 11:17:41.022076 systemd-logind[1434]: Removed session 12. 
Jan 29 11:17:41.068482 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 35102 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:17:41.069629 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:41.073637 systemd-logind[1434]: New session 13 of user core. Jan 29 11:17:41.080527 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:17:41.187245 sshd[4102]: Connection closed by 10.0.0.1 port 35102 Jan 29 11:17:41.187565 sshd-session[4100]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:41.190367 systemd[1]: sshd@12-10.0.0.135:22-10.0.0.1:35102.service: Deactivated successfully. Jan 29 11:17:41.192201 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:17:41.192870 systemd-logind[1434]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:17:41.193736 systemd-logind[1434]: Removed session 13. Jan 29 11:17:46.198005 systemd[1]: Started sshd@13-10.0.0.135:22-10.0.0.1:40550.service - OpenSSH per-connection server daemon (10.0.0.1:40550). Jan 29 11:17:46.244421 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 40550 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:17:46.245743 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:46.249995 systemd-logind[1434]: New session 14 of user core. Jan 29 11:17:46.263664 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:17:46.372663 sshd[4120]: Connection closed by 10.0.0.1 port 40550 Jan 29 11:17:46.373007 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:46.376254 systemd[1]: sshd@13-10.0.0.135:22-10.0.0.1:40550.service: Deactivated successfully. Jan 29 11:17:46.377903 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:17:46.378650 systemd-logind[1434]: Session 14 logged out. Waiting for processes to exit. 
Jan 29 11:17:46.379541 systemd-logind[1434]: Removed session 14. Jan 29 11:17:51.386938 systemd[1]: Started sshd@14-10.0.0.135:22-10.0.0.1:40560.service - OpenSSH per-connection server daemon (10.0.0.1:40560). Jan 29 11:17:51.432734 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 40560 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:17:51.433964 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:51.438057 systemd-logind[1434]: New session 15 of user core. Jan 29 11:17:51.445544 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:17:51.555276 sshd[4134]: Connection closed by 10.0.0.1 port 40560 Jan 29 11:17:51.555600 sshd-session[4132]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:51.567055 systemd[1]: sshd@14-10.0.0.135:22-10.0.0.1:40560.service: Deactivated successfully. Jan 29 11:17:51.568790 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:17:51.570586 systemd-logind[1434]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:17:51.578656 systemd[1]: Started sshd@15-10.0.0.135:22-10.0.0.1:40576.service - OpenSSH per-connection server daemon (10.0.0.1:40576). Jan 29 11:17:51.580749 systemd-logind[1434]: Removed session 15. Jan 29 11:17:51.620222 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 40576 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:17:51.621322 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:51.625509 systemd-logind[1434]: New session 16 of user core. Jan 29 11:17:51.632596 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 29 11:17:51.838461 sshd[4149]: Connection closed by 10.0.0.1 port 40576 Jan 29 11:17:51.838877 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:51.850777 systemd[1]: sshd@15-10.0.0.135:22-10.0.0.1:40576.service: Deactivated successfully. Jan 29 11:17:51.852270 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:17:51.853441 systemd-logind[1434]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:17:51.862667 systemd[1]: Started sshd@16-10.0.0.135:22-10.0.0.1:40578.service - OpenSSH per-connection server daemon (10.0.0.1:40578). Jan 29 11:17:51.863545 systemd-logind[1434]: Removed session 16. Jan 29 11:17:51.910272 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 40578 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:17:51.911611 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:51.915291 systemd-logind[1434]: New session 17 of user core. Jan 29 11:17:51.922543 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:17:53.190869 sshd[4161]: Connection closed by 10.0.0.1 port 40578 Jan 29 11:17:53.191362 sshd-session[4159]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:53.203122 systemd[1]: sshd@16-10.0.0.135:22-10.0.0.1:40578.service: Deactivated successfully. Jan 29 11:17:53.204809 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:17:53.207445 systemd-logind[1434]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:17:53.214881 systemd[1]: Started sshd@17-10.0.0.135:22-10.0.0.1:37434.service - OpenSSH per-connection server daemon (10.0.0.1:37434). Jan 29 11:17:53.216129 systemd-logind[1434]: Removed session 17. 
Jan 29 11:17:53.258520 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 37434 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:17:53.259967 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:53.263697 systemd-logind[1434]: New session 18 of user core. Jan 29 11:17:53.275613 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:17:53.484845 sshd[4183]: Connection closed by 10.0.0.1 port 37434 Jan 29 11:17:53.485595 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:53.492870 systemd[1]: sshd@17-10.0.0.135:22-10.0.0.1:37434.service: Deactivated successfully. Jan 29 11:17:53.494260 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:17:53.495784 systemd-logind[1434]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:17:53.505247 systemd[1]: Started sshd@18-10.0.0.135:22-10.0.0.1:37438.service - OpenSSH per-connection server daemon (10.0.0.1:37438). Jan 29 11:17:53.508641 systemd-logind[1434]: Removed session 18. Jan 29 11:17:53.556897 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 37438 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:17:53.558183 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:53.562630 systemd-logind[1434]: New session 19 of user core. Jan 29 11:17:53.570598 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:17:53.679210 sshd[4196]: Connection closed by 10.0.0.1 port 37438 Jan 29 11:17:53.679570 sshd-session[4194]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:53.683014 systemd[1]: sshd@18-10.0.0.135:22-10.0.0.1:37438.service: Deactivated successfully. Jan 29 11:17:53.684907 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:17:53.687132 systemd-logind[1434]: Session 19 logged out. Waiting for processes to exit. 
Jan 29 11:17:53.688142 systemd-logind[1434]: Removed session 19. Jan 29 11:17:58.689820 systemd[1]: Started sshd@19-10.0.0.135:22-10.0.0.1:37452.service - OpenSSH per-connection server daemon (10.0.0.1:37452). Jan 29 11:17:58.737845 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 37452 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:17:58.738428 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:17:58.743495 systemd-logind[1434]: New session 20 of user core. Jan 29 11:17:58.749811 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:17:58.859550 sshd[4214]: Connection closed by 10.0.0.1 port 37452 Jan 29 11:17:58.859879 sshd-session[4212]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:58.863150 systemd[1]: sshd@19-10.0.0.135:22-10.0.0.1:37452.service: Deactivated successfully. Jan 29 11:17:58.864817 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:17:58.865550 systemd-logind[1434]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:17:58.866340 systemd-logind[1434]: Removed session 20. Jan 29 11:18:03.882024 systemd[1]: Started sshd@20-10.0.0.135:22-10.0.0.1:40388.service - OpenSSH per-connection server daemon (10.0.0.1:40388). Jan 29 11:18:03.928274 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 40388 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:18:03.929946 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:18:03.933746 systemd-logind[1434]: New session 21 of user core. Jan 29 11:18:03.941569 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 29 11:18:04.048690 sshd[4230]: Connection closed by 10.0.0.1 port 40388 Jan 29 11:18:04.049035 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Jan 29 11:18:04.052579 systemd[1]: sshd@20-10.0.0.135:22-10.0.0.1:40388.service: Deactivated successfully. Jan 29 11:18:04.054161 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:18:04.055869 systemd-logind[1434]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:18:04.056936 systemd-logind[1434]: Removed session 21. Jan 29 11:18:09.059907 systemd[1]: Started sshd@21-10.0.0.135:22-10.0.0.1:40398.service - OpenSSH per-connection server daemon (10.0.0.1:40398). Jan 29 11:18:09.105364 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 40398 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:18:09.106509 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:18:09.110615 systemd-logind[1434]: New session 22 of user core. Jan 29 11:18:09.116585 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:18:09.225851 sshd[4246]: Connection closed by 10.0.0.1 port 40398 Jan 29 11:18:09.226303 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Jan 29 11:18:09.232781 systemd[1]: sshd@21-10.0.0.135:22-10.0.0.1:40398.service: Deactivated successfully. Jan 29 11:18:09.234191 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:18:09.235785 systemd-logind[1434]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:18:09.243651 systemd[1]: Started sshd@22-10.0.0.135:22-10.0.0.1:40406.service - OpenSSH per-connection server daemon (10.0.0.1:40406). Jan 29 11:18:09.244714 systemd-logind[1434]: Removed session 22. 
Jan 29 11:18:09.284964 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 40406 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:18:09.286123 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:18:09.290070 systemd-logind[1434]: New session 23 of user core. Jan 29 11:18:09.297642 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 11:18:11.181885 containerd[1448]: time="2025-01-29T11:18:11.181755995Z" level=info msg="StopContainer for \"2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5\" with timeout 30 (s)" Jan 29 11:18:11.182468 containerd[1448]: time="2025-01-29T11:18:11.182092394Z" level=info msg="Stop container \"2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5\" with signal terminated" Jan 29 11:18:11.192692 systemd[1]: cri-containerd-2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5.scope: Deactivated successfully. Jan 29 11:18:11.205400 systemd[1]: run-containerd-runc-k8s.io-6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19-runc.Uw5Kcn.mount: Deactivated successfully. Jan 29 11:18:11.212793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5-rootfs.mount: Deactivated successfully. 
Jan 29 11:18:11.221608 containerd[1448]: time="2025-01-29T11:18:11.221507697Z" level=info msg="shim disconnected" id=2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5 namespace=k8s.io Jan 29 11:18:11.221608 containerd[1448]: time="2025-01-29T11:18:11.221599937Z" level=warning msg="cleaning up after shim disconnected" id=2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5 namespace=k8s.io Jan 29 11:18:11.221608 containerd[1448]: time="2025-01-29T11:18:11.221609417Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:18:11.223962 containerd[1448]: time="2025-01-29T11:18:11.223679856Z" level=info msg="StopContainer for \"6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19\" with timeout 2 (s)" Jan 29 11:18:11.224326 containerd[1448]: time="2025-01-29T11:18:11.224071376Z" level=info msg="Stop container \"6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19\" with signal terminated" Jan 29 11:18:11.245273 systemd-networkd[1387]: lxc_health: Link DOWN Jan 29 11:18:11.245279 systemd-networkd[1387]: lxc_health: Lost carrier Jan 29 11:18:11.255878 containerd[1448]: time="2025-01-29T11:18:11.255822002Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:18:11.266936 systemd[1]: cri-containerd-6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19.scope: Deactivated successfully. Jan 29 11:18:11.268311 systemd[1]: cri-containerd-6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19.scope: Consumed 6.434s CPU time. 
Jan 29 11:18:11.281057 containerd[1448]: time="2025-01-29T11:18:11.281011632Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:18:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:18:11.284952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19-rootfs.mount: Deactivated successfully. Jan 29 11:18:11.285293 containerd[1448]: time="2025-01-29T11:18:11.285115150Z" level=info msg="StopContainer for \"2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5\" returns successfully" Jan 29 11:18:11.286686 containerd[1448]: time="2025-01-29T11:18:11.286593029Z" level=info msg="StopPodSandbox for \"5b6c4a73545a48cb28cde13beae286d1a43e17775a5211f1d63a0f97392ccbc1\"" Jan 29 11:18:11.287568 containerd[1448]: time="2025-01-29T11:18:11.286689269Z" level=info msg="Container to stop \"2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:18:11.288597 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5b6c4a73545a48cb28cde13beae286d1a43e17775a5211f1d63a0f97392ccbc1-shm.mount: Deactivated successfully. 
Jan 29 11:18:11.290797 containerd[1448]: time="2025-01-29T11:18:11.290713947Z" level=info msg="shim disconnected" id=6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19 namespace=k8s.io Jan 29 11:18:11.290797 containerd[1448]: time="2025-01-29T11:18:11.290765347Z" level=warning msg="cleaning up after shim disconnected" id=6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19 namespace=k8s.io Jan 29 11:18:11.290797 containerd[1448]: time="2025-01-29T11:18:11.290775427Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:18:11.294972 systemd[1]: cri-containerd-5b6c4a73545a48cb28cde13beae286d1a43e17775a5211f1d63a0f97392ccbc1.scope: Deactivated successfully. Jan 29 11:18:11.314087 containerd[1448]: time="2025-01-29T11:18:11.313440017Z" level=info msg="StopContainer for \"6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19\" returns successfully" Jan 29 11:18:11.314087 containerd[1448]: time="2025-01-29T11:18:11.313920937Z" level=info msg="StopPodSandbox for \"c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842\"" Jan 29 11:18:11.314087 containerd[1448]: time="2025-01-29T11:18:11.313951897Z" level=info msg="Container to stop \"b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:18:11.314087 containerd[1448]: time="2025-01-29T11:18:11.313963537Z" level=info msg="Container to stop \"6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:18:11.314087 containerd[1448]: time="2025-01-29T11:18:11.313972297Z" level=info msg="Container to stop \"69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:18:11.314087 containerd[1448]: time="2025-01-29T11:18:11.313981137Z" level=info msg="Container to stop 
\"293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:18:11.314087 containerd[1448]: time="2025-01-29T11:18:11.313989617Z" level=info msg="Container to stop \"a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:18:11.319574 systemd[1]: cri-containerd-c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842.scope: Deactivated successfully. Jan 29 11:18:11.322678 containerd[1448]: time="2025-01-29T11:18:11.322294854Z" level=info msg="shim disconnected" id=5b6c4a73545a48cb28cde13beae286d1a43e17775a5211f1d63a0f97392ccbc1 namespace=k8s.io Jan 29 11:18:11.322678 containerd[1448]: time="2025-01-29T11:18:11.322674573Z" level=warning msg="cleaning up after shim disconnected" id=5b6c4a73545a48cb28cde13beae286d1a43e17775a5211f1d63a0f97392ccbc1 namespace=k8s.io Jan 29 11:18:11.322678 containerd[1448]: time="2025-01-29T11:18:11.322685333Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:18:11.349320 containerd[1448]: time="2025-01-29T11:18:11.349268202Z" level=info msg="TearDown network for sandbox \"5b6c4a73545a48cb28cde13beae286d1a43e17775a5211f1d63a0f97392ccbc1\" successfully" Jan 29 11:18:11.349320 containerd[1448]: time="2025-01-29T11:18:11.349303322Z" level=info msg="StopPodSandbox for \"5b6c4a73545a48cb28cde13beae286d1a43e17775a5211f1d63a0f97392ccbc1\" returns successfully" Jan 29 11:18:11.353809 containerd[1448]: time="2025-01-29T11:18:11.353735800Z" level=info msg="shim disconnected" id=c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842 namespace=k8s.io Jan 29 11:18:11.353809 containerd[1448]: time="2025-01-29T11:18:11.353791160Z" level=warning msg="cleaning up after shim disconnected" id=c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842 namespace=k8s.io Jan 29 11:18:11.353809 containerd[1448]: 
time="2025-01-29T11:18:11.353799720Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:18:11.366207 containerd[1448]: time="2025-01-29T11:18:11.366160355Z" level=info msg="TearDown network for sandbox \"c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842\" successfully" Jan 29 11:18:11.366207 containerd[1448]: time="2025-01-29T11:18:11.366201195Z" level=info msg="StopPodSandbox for \"c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842\" returns successfully" Jan 29 11:18:11.421146 kubelet[2618]: I0129 11:18:11.421106 2618 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-host-proc-sys-kernel\") pod \"74f526b7-4f68-4729-95b8-107417cf2ba3\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " Jan 29 11:18:11.421146 kubelet[2618]: I0129 11:18:11.421150 2618 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77aff757-b770-456a-955f-1126ffd22913-cilium-config-path\") pod \"77aff757-b770-456a-955f-1126ffd22913\" (UID: \"77aff757-b770-456a-955f-1126ffd22913\") " Jan 29 11:18:11.421566 kubelet[2618]: I0129 11:18:11.421166 2618 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-etc-cni-netd\") pod \"74f526b7-4f68-4729-95b8-107417cf2ba3\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " Jan 29 11:18:11.421566 kubelet[2618]: I0129 11:18:11.421183 2618 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-xtables-lock\") pod \"74f526b7-4f68-4729-95b8-107417cf2ba3\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " Jan 29 11:18:11.421566 kubelet[2618]: I0129 11:18:11.421201 2618 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfbnw\" (UniqueName: \"kubernetes.io/projected/77aff757-b770-456a-955f-1126ffd22913-kube-api-access-bfbnw\") pod \"77aff757-b770-456a-955f-1126ffd22913\" (UID: \"77aff757-b770-456a-955f-1126ffd22913\") " Jan 29 11:18:11.421566 kubelet[2618]: I0129 11:18:11.421220 2618 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74f526b7-4f68-4729-95b8-107417cf2ba3-hubble-tls\") pod \"74f526b7-4f68-4729-95b8-107417cf2ba3\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " Jan 29 11:18:11.421566 kubelet[2618]: I0129 11:18:11.421235 2618 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74f526b7-4f68-4729-95b8-107417cf2ba3-cilium-config-path\") pod \"74f526b7-4f68-4729-95b8-107417cf2ba3\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " Jan 29 11:18:11.421566 kubelet[2618]: I0129 11:18:11.421253 2618 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74f526b7-4f68-4729-95b8-107417cf2ba3-clustermesh-secrets\") pod \"74f526b7-4f68-4729-95b8-107417cf2ba3\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " Jan 29 11:18:11.421748 kubelet[2618]: I0129 11:18:11.421270 2618 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwrx2\" (UniqueName: \"kubernetes.io/projected/74f526b7-4f68-4729-95b8-107417cf2ba3-kube-api-access-pwrx2\") pod \"74f526b7-4f68-4729-95b8-107417cf2ba3\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " Jan 29 11:18:11.421748 kubelet[2618]: I0129 11:18:11.421283 2618 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-cilium-cgroup\") pod 
\"74f526b7-4f68-4729-95b8-107417cf2ba3\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " Jan 29 11:18:11.421748 kubelet[2618]: I0129 11:18:11.421297 2618 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-cni-path\") pod \"74f526b7-4f68-4729-95b8-107417cf2ba3\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " Jan 29 11:18:11.421748 kubelet[2618]: I0129 11:18:11.421312 2618 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-host-proc-sys-net\") pod \"74f526b7-4f68-4729-95b8-107417cf2ba3\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " Jan 29 11:18:11.421748 kubelet[2618]: I0129 11:18:11.421328 2618 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-cilium-run\") pod \"74f526b7-4f68-4729-95b8-107417cf2ba3\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " Jan 29 11:18:11.421748 kubelet[2618]: I0129 11:18:11.421342 2618 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-lib-modules\") pod \"74f526b7-4f68-4729-95b8-107417cf2ba3\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " Jan 29 11:18:11.421878 kubelet[2618]: I0129 11:18:11.421357 2618 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-bpf-maps\") pod \"74f526b7-4f68-4729-95b8-107417cf2ba3\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " Jan 29 11:18:11.421878 kubelet[2618]: I0129 11:18:11.421370 2618 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-hostproc\") pod \"74f526b7-4f68-4729-95b8-107417cf2ba3\" (UID: \"74f526b7-4f68-4729-95b8-107417cf2ba3\") " Jan 29 11:18:11.426051 kubelet[2618]: I0129 11:18:11.425636 2618 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "74f526b7-4f68-4729-95b8-107417cf2ba3" (UID: "74f526b7-4f68-4729-95b8-107417cf2ba3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:18:11.426051 kubelet[2618]: I0129 11:18:11.425740 2618 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "74f526b7-4f68-4729-95b8-107417cf2ba3" (UID: "74f526b7-4f68-4729-95b8-107417cf2ba3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:18:11.426051 kubelet[2618]: I0129 11:18:11.425894 2618 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-hostproc" (OuterVolumeSpecName: "hostproc") pod "74f526b7-4f68-4729-95b8-107417cf2ba3" (UID: "74f526b7-4f68-4729-95b8-107417cf2ba3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:18:11.431527 kubelet[2618]: I0129 11:18:11.431189 2618 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77aff757-b770-456a-955f-1126ffd22913-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "77aff757-b770-456a-955f-1126ffd22913" (UID: "77aff757-b770-456a-955f-1126ffd22913"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:18:11.431527 kubelet[2618]: I0129 11:18:11.431254 2618 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "74f526b7-4f68-4729-95b8-107417cf2ba3" (UID: "74f526b7-4f68-4729-95b8-107417cf2ba3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:18:11.433434 kubelet[2618]: I0129 11:18:11.432255 2618 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74f526b7-4f68-4729-95b8-107417cf2ba3-kube-api-access-pwrx2" (OuterVolumeSpecName: "kube-api-access-pwrx2") pod "74f526b7-4f68-4729-95b8-107417cf2ba3" (UID: "74f526b7-4f68-4729-95b8-107417cf2ba3"). InnerVolumeSpecName "kube-api-access-pwrx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:11.433434 kubelet[2618]: I0129 11:18:11.432316 2618 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "74f526b7-4f68-4729-95b8-107417cf2ba3" (UID: "74f526b7-4f68-4729-95b8-107417cf2ba3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:18:11.433434 kubelet[2618]: I0129 11:18:11.432335 2618 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-cni-path" (OuterVolumeSpecName: "cni-path") pod "74f526b7-4f68-4729-95b8-107417cf2ba3" (UID: "74f526b7-4f68-4729-95b8-107417cf2ba3"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:18:11.433434 kubelet[2618]: I0129 11:18:11.432353 2618 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "74f526b7-4f68-4729-95b8-107417cf2ba3" (UID: "74f526b7-4f68-4729-95b8-107417cf2ba3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:18:11.433434 kubelet[2618]: I0129 11:18:11.432375 2618 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "74f526b7-4f68-4729-95b8-107417cf2ba3" (UID: "74f526b7-4f68-4729-95b8-107417cf2ba3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:18:11.433615 kubelet[2618]: I0129 11:18:11.432394 2618 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "74f526b7-4f68-4729-95b8-107417cf2ba3" (UID: "74f526b7-4f68-4729-95b8-107417cf2ba3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:18:11.433615 kubelet[2618]: I0129 11:18:11.432452 2618 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "74f526b7-4f68-4729-95b8-107417cf2ba3" (UID: "74f526b7-4f68-4729-95b8-107417cf2ba3"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:18:11.433615 kubelet[2618]: I0129 11:18:11.432648 2618 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74f526b7-4f68-4729-95b8-107417cf2ba3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "74f526b7-4f68-4729-95b8-107417cf2ba3" (UID: "74f526b7-4f68-4729-95b8-107417cf2ba3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:11.433615 kubelet[2618]: I0129 11:18:11.433148 2618 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74f526b7-4f68-4729-95b8-107417cf2ba3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "74f526b7-4f68-4729-95b8-107417cf2ba3" (UID: "74f526b7-4f68-4729-95b8-107417cf2ba3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:18:11.433615 kubelet[2618]: I0129 11:18:11.433544 2618 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74f526b7-4f68-4729-95b8-107417cf2ba3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "74f526b7-4f68-4729-95b8-107417cf2ba3" (UID: "74f526b7-4f68-4729-95b8-107417cf2ba3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:11.434295 kubelet[2618]: I0129 11:18:11.434264 2618 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77aff757-b770-456a-955f-1126ffd22913-kube-api-access-bfbnw" (OuterVolumeSpecName: "kube-api-access-bfbnw") pod "77aff757-b770-456a-955f-1126ffd22913" (UID: "77aff757-b770-456a-955f-1126ffd22913"). InnerVolumeSpecName "kube-api-access-bfbnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:11.521687 kubelet[2618]: I0129 11:18:11.521641 2618 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 29 11:18:11.521687 kubelet[2618]: I0129 11:18:11.521676 2618 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 29 11:18:11.521687 kubelet[2618]: I0129 11:18:11.521684 2618 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 29 11:18:11.521687 kubelet[2618]: I0129 11:18:11.521692 2618 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 29 11:18:11.521687 kubelet[2618]: I0129 11:18:11.521700 2618 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 29 11:18:11.521883 kubelet[2618]: I0129 11:18:11.521708 2618 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 29 11:18:11.521883 kubelet[2618]: I0129 11:18:11.521717 2618 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77aff757-b770-456a-955f-1126ffd22913-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:18:11.521883 kubelet[2618]: I0129 11:18:11.521724 2618 
reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 29 11:18:11.521883 kubelet[2618]: I0129 11:18:11.521732 2618 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bfbnw\" (UniqueName: \"kubernetes.io/projected/77aff757-b770-456a-955f-1126ffd22913-kube-api-access-bfbnw\") on node \"localhost\" DevicePath \"\"" Jan 29 11:18:11.521883 kubelet[2618]: I0129 11:18:11.521740 2618 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 29 11:18:11.521883 kubelet[2618]: I0129 11:18:11.521747 2618 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74f526b7-4f68-4729-95b8-107417cf2ba3-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 29 11:18:11.521883 kubelet[2618]: I0129 11:18:11.521754 2618 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74f526b7-4f68-4729-95b8-107417cf2ba3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:18:11.521883 kubelet[2618]: I0129 11:18:11.521761 2618 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74f526b7-4f68-4729-95b8-107417cf2ba3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 29 11:18:11.522048 kubelet[2618]: I0129 11:18:11.521768 2618 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pwrx2\" (UniqueName: \"kubernetes.io/projected/74f526b7-4f68-4729-95b8-107417cf2ba3-kube-api-access-pwrx2\") on node \"localhost\" DevicePath \"\"" Jan 29 11:18:11.522048 kubelet[2618]: I0129 11:18:11.521776 2618 reconciler_common.go:289] "Volume detached for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 29 11:18:11.522048 kubelet[2618]: I0129 11:18:11.521783 2618 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74f526b7-4f68-4729-95b8-107417cf2ba3-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 29 11:18:11.944106 kubelet[2618]: I0129 11:18:11.944061 2618 scope.go:117] "RemoveContainer" containerID="2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5" Jan 29 11:18:11.946464 containerd[1448]: time="2025-01-29T11:18:11.945202023Z" level=info msg="RemoveContainer for \"2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5\"" Jan 29 11:18:11.949859 containerd[1448]: time="2025-01-29T11:18:11.949810381Z" level=info msg="RemoveContainer for \"2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5\" returns successfully" Jan 29 11:18:11.950342 kubelet[2618]: I0129 11:18:11.950045 2618 scope.go:117] "RemoveContainer" containerID="2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5" Jan 29 11:18:11.950439 containerd[1448]: time="2025-01-29T11:18:11.950237341Z" level=error msg="ContainerStatus for \"2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5\": not found" Jan 29 11:18:11.951986 systemd[1]: Removed slice kubepods-besteffort-pod77aff757_b770_456a_955f_1126ffd22913.slice - libcontainer container kubepods-besteffort-pod77aff757_b770_456a_955f_1126ffd22913.slice. Jan 29 11:18:11.953516 systemd[1]: Removed slice kubepods-burstable-pod74f526b7_4f68_4729_95b8_107417cf2ba3.slice - libcontainer container kubepods-burstable-pod74f526b7_4f68_4729_95b8_107417cf2ba3.slice. 
Jan 29 11:18:11.953602 systemd[1]: kubepods-burstable-pod74f526b7_4f68_4729_95b8_107417cf2ba3.slice: Consumed 6.598s CPU time. Jan 29 11:18:11.958389 kubelet[2618]: E0129 11:18:11.958351 2618 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5\": not found" containerID="2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5" Jan 29 11:18:11.958580 kubelet[2618]: I0129 11:18:11.958494 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5"} err="failed to get container status \"2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5\": rpc error: code = NotFound desc = an error occurred when try to find container \"2225aa8c95cad83639dde959ee09ad0906f0a068e0be1c895c47adf57cae6fd5\": not found" Jan 29 11:18:11.958656 kubelet[2618]: I0129 11:18:11.958644 2618 scope.go:117] "RemoveContainer" containerID="6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19" Jan 29 11:18:11.960163 containerd[1448]: time="2025-01-29T11:18:11.960131976Z" level=info msg="RemoveContainer for \"6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19\"" Jan 29 11:18:11.963307 containerd[1448]: time="2025-01-29T11:18:11.963221135Z" level=info msg="RemoveContainer for \"6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19\" returns successfully" Jan 29 11:18:11.963688 kubelet[2618]: I0129 11:18:11.963541 2618 scope.go:117] "RemoveContainer" containerID="a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda" Jan 29 11:18:11.964641 containerd[1448]: time="2025-01-29T11:18:11.964617214Z" level=info msg="RemoveContainer for \"a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda\"" Jan 29 11:18:11.966870 containerd[1448]: 
time="2025-01-29T11:18:11.966843294Z" level=info msg="RemoveContainer for \"a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda\" returns successfully" Jan 29 11:18:11.967039 kubelet[2618]: I0129 11:18:11.967019 2618 scope.go:117] "RemoveContainer" containerID="b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815" Jan 29 11:18:11.968616 containerd[1448]: time="2025-01-29T11:18:11.968589813Z" level=info msg="RemoveContainer for \"b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815\"" Jan 29 11:18:11.970754 containerd[1448]: time="2025-01-29T11:18:11.970720812Z" level=info msg="RemoveContainer for \"b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815\" returns successfully" Jan 29 11:18:11.971021 kubelet[2618]: I0129 11:18:11.970987 2618 scope.go:117] "RemoveContainer" containerID="293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e" Jan 29 11:18:11.971902 containerd[1448]: time="2025-01-29T11:18:11.971866931Z" level=info msg="RemoveContainer for \"293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e\"" Jan 29 11:18:11.974077 containerd[1448]: time="2025-01-29T11:18:11.974043650Z" level=info msg="RemoveContainer for \"293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e\" returns successfully" Jan 29 11:18:11.974239 kubelet[2618]: I0129 11:18:11.974211 2618 scope.go:117] "RemoveContainer" containerID="69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007" Jan 29 11:18:11.975398 containerd[1448]: time="2025-01-29T11:18:11.975179890Z" level=info msg="RemoveContainer for \"69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007\"" Jan 29 11:18:11.978261 containerd[1448]: time="2025-01-29T11:18:11.978235489Z" level=info msg="RemoveContainer for \"69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007\" returns successfully" Jan 29 11:18:11.978552 kubelet[2618]: I0129 11:18:11.978524 2618 scope.go:117] "RemoveContainer" 
containerID="6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19" Jan 29 11:18:11.979071 containerd[1448]: time="2025-01-29T11:18:11.979030368Z" level=error msg="ContainerStatus for \"6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19\": not found" Jan 29 11:18:11.979202 kubelet[2618]: E0129 11:18:11.979182 2618 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19\": not found" containerID="6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19" Jan 29 11:18:11.979233 kubelet[2618]: I0129 11:18:11.979209 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19"} err="failed to get container status \"6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19\": rpc error: code = NotFound desc = an error occurred when try to find container \"6fd7d9217ea9de7399e82a27b8abea2b08ae3fb01abbcdea3ed2475ab753cf19\": not found" Jan 29 11:18:11.979233 kubelet[2618]: I0129 11:18:11.979228 2618 scope.go:117] "RemoveContainer" containerID="a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda" Jan 29 11:18:11.979438 containerd[1448]: time="2025-01-29T11:18:11.979392488Z" level=error msg="ContainerStatus for \"a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda\": not found" Jan 29 11:18:11.979545 kubelet[2618]: E0129 11:18:11.979522 2618 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda\": not found" containerID="a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda" Jan 29 11:18:11.979583 kubelet[2618]: I0129 11:18:11.979550 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda"} err="failed to get container status \"a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda\": rpc error: code = NotFound desc = an error occurred when try to find container \"a309bb8d09b1869a6c3586b03eed1987e6e575a78c2a9a75bb7fb478891c7bda\": not found" Jan 29 11:18:11.979583 kubelet[2618]: I0129 11:18:11.979571 2618 scope.go:117] "RemoveContainer" containerID="b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815" Jan 29 11:18:11.979780 containerd[1448]: time="2025-01-29T11:18:11.979739088Z" level=error msg="ContainerStatus for \"b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815\": not found" Jan 29 11:18:11.979883 kubelet[2618]: E0129 11:18:11.979864 2618 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815\": not found" containerID="b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815" Jan 29 11:18:11.979922 kubelet[2618]: I0129 11:18:11.979890 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815"} err="failed to get container status \"b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"b04f38c1c016e38ebecc8eb0f816ba3767d03d18e6a8838f0c9b6eea9ade0815\": not found" Jan 29 11:18:11.979922 kubelet[2618]: I0129 11:18:11.979909 2618 scope.go:117] "RemoveContainer" containerID="293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e" Jan 29 11:18:11.980081 containerd[1448]: time="2025-01-29T11:18:11.980056128Z" level=error msg="ContainerStatus for \"293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e\": not found" Jan 29 11:18:11.980163 kubelet[2618]: E0129 11:18:11.980145 2618 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e\": not found" containerID="293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e" Jan 29 11:18:11.980200 kubelet[2618]: I0129 11:18:11.980166 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e"} err="failed to get container status \"293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e\": rpc error: code = NotFound desc = an error occurred when try to find container \"293cbc3eb20946c878cf603679853936603f7056acdd84cbaf010cb18a3f4c8e\": not found" Jan 29 11:18:11.980200 kubelet[2618]: I0129 11:18:11.980178 2618 scope.go:117] "RemoveContainer" containerID="69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007" Jan 29 11:18:11.980363 containerd[1448]: time="2025-01-29T11:18:11.980316208Z" level=error msg="ContainerStatus for \"69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007\" failed" error="rpc error: code = NotFound desc = an error occurred when try 
to find container \"69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007\": not found" Jan 29 11:18:11.980456 kubelet[2618]: E0129 11:18:11.980436 2618 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007\": not found" containerID="69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007" Jan 29 11:18:11.980508 kubelet[2618]: I0129 11:18:11.980461 2618 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007"} err="failed to get container status \"69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007\": rpc error: code = NotFound desc = an error occurred when try to find container \"69b29d79a156d972bfcbe1850cab4e2b4580bb79c04a5b0773f8d27887575007\": not found" Jan 29 11:18:12.200788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b6c4a73545a48cb28cde13beae286d1a43e17775a5211f1d63a0f97392ccbc1-rootfs.mount: Deactivated successfully. Jan 29 11:18:12.200881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842-rootfs.mount: Deactivated successfully. Jan 29 11:18:12.200930 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c851d9a3bb8b9cf1bc2b191a95ce110e905f4eaf710992a92036b3dfc542e842-shm.mount: Deactivated successfully. Jan 29 11:18:12.200984 systemd[1]: var-lib-kubelet-pods-77aff757\x2db770\x2d456a\x2d955f\x2d1126ffd22913-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbfbnw.mount: Deactivated successfully. Jan 29 11:18:12.201038 systemd[1]: var-lib-kubelet-pods-74f526b7\x2d4f68\x2d4729\x2d95b8\x2d107417cf2ba3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpwrx2.mount: Deactivated successfully. 
Jan 29 11:18:12.201084 systemd[1]: var-lib-kubelet-pods-74f526b7\x2d4f68\x2d4729\x2d95b8\x2d107417cf2ba3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 11:18:12.201130 systemd[1]: var-lib-kubelet-pods-74f526b7\x2d4f68\x2d4729\x2d95b8\x2d107417cf2ba3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 11:18:12.749619 kubelet[2618]: I0129 11:18:12.749576 2618 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74f526b7-4f68-4729-95b8-107417cf2ba3" path="/var/lib/kubelet/pods/74f526b7-4f68-4729-95b8-107417cf2ba3/volumes" Jan 29 11:18:12.750127 kubelet[2618]: I0129 11:18:12.750105 2618 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77aff757-b770-456a-955f-1126ffd22913" path="/var/lib/kubelet/pods/77aff757-b770-456a-955f-1126ffd22913/volumes" Jan 29 11:18:13.140547 sshd[4260]: Connection closed by 10.0.0.1 port 40406 Jan 29 11:18:13.141066 sshd-session[4258]: pam_unix(sshd:session): session closed for user core Jan 29 11:18:13.147811 systemd[1]: sshd@22-10.0.0.135:22-10.0.0.1:40406.service: Deactivated successfully. Jan 29 11:18:13.149380 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:18:13.149632 systemd[1]: session-23.scope: Consumed 1.222s CPU time. Jan 29 11:18:13.150632 systemd-logind[1434]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:18:13.156821 systemd[1]: Started sshd@23-10.0.0.135:22-10.0.0.1:46644.service - OpenSSH per-connection server daemon (10.0.0.1:46644). Jan 29 11:18:13.157740 systemd-logind[1434]: Removed session 23. Jan 29 11:18:13.198861 sshd[4418]: Accepted publickey for core from 10.0.0.1 port 46644 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:18:13.199958 sshd-session[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:18:13.203208 systemd-logind[1434]: New session 24 of user core. 
Jan 29 11:18:13.210539 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 11:18:13.801031 kubelet[2618]: E0129 11:18:13.800990 2618 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 11:18:14.366726 sshd[4420]: Connection closed by 10.0.0.1 port 46644 Jan 29 11:18:14.367517 sshd-session[4418]: pam_unix(sshd:session): session closed for user core Jan 29 11:18:14.378203 systemd[1]: sshd@23-10.0.0.135:22-10.0.0.1:46644.service: Deactivated successfully. Jan 29 11:18:14.380938 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 11:18:14.381093 systemd[1]: session-24.scope: Consumed 1.081s CPU time. Jan 29 11:18:14.383860 kubelet[2618]: I0129 11:18:14.383028 2618 topology_manager.go:215] "Topology Admit Handler" podUID="1cb2cf81-2f29-4713-bc65-f3ea01ca0b20" podNamespace="kube-system" podName="cilium-gw77c" Jan 29 11:18:14.384548 kubelet[2618]: E0129 11:18:14.383947 2618 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74f526b7-4f68-4729-95b8-107417cf2ba3" containerName="mount-cgroup" Jan 29 11:18:14.384548 kubelet[2618]: E0129 11:18:14.383974 2618 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74f526b7-4f68-4729-95b8-107417cf2ba3" containerName="clean-cilium-state" Jan 29 11:18:14.384548 kubelet[2618]: E0129 11:18:14.383983 2618 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74f526b7-4f68-4729-95b8-107417cf2ba3" containerName="cilium-agent" Jan 29 11:18:14.384548 kubelet[2618]: E0129 11:18:14.383989 2618 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="77aff757-b770-456a-955f-1126ffd22913" containerName="cilium-operator" Jan 29 11:18:14.384548 kubelet[2618]: E0129 11:18:14.383995 2618 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74f526b7-4f68-4729-95b8-107417cf2ba3" containerName="mount-bpf-fs" Jan 29 
11:18:14.384548 kubelet[2618]: E0129 11:18:14.384002 2618 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74f526b7-4f68-4729-95b8-107417cf2ba3" containerName="apply-sysctl-overwrites" Jan 29 11:18:14.384548 kubelet[2618]: I0129 11:18:14.384041 2618 memory_manager.go:354] "RemoveStaleState removing state" podUID="74f526b7-4f68-4729-95b8-107417cf2ba3" containerName="cilium-agent" Jan 29 11:18:14.384548 kubelet[2618]: I0129 11:18:14.384052 2618 memory_manager.go:354] "RemoveStaleState removing state" podUID="77aff757-b770-456a-955f-1126ffd22913" containerName="cilium-operator" Jan 29 11:18:14.384599 systemd-logind[1434]: Session 24 logged out. Waiting for processes to exit. Jan 29 11:18:14.393955 systemd[1]: Started sshd@24-10.0.0.135:22-10.0.0.1:46660.service - OpenSSH per-connection server daemon (10.0.0.1:46660). Jan 29 11:18:14.395148 systemd-logind[1434]: Removed session 24. Jan 29 11:18:14.411525 systemd[1]: Created slice kubepods-burstable-pod1cb2cf81_2f29_4713_bc65_f3ea01ca0b20.slice - libcontainer container kubepods-burstable-pod1cb2cf81_2f29_4713_bc65_f3ea01ca0b20.slice. 
Jan 29 11:18:14.437455 kubelet[2618]: I0129 11:18:14.437200 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfw4z\" (UniqueName: \"kubernetes.io/projected/1cb2cf81-2f29-4713-bc65-f3ea01ca0b20-kube-api-access-gfw4z\") pod \"cilium-gw77c\" (UID: \"1cb2cf81-2f29-4713-bc65-f3ea01ca0b20\") " pod="kube-system/cilium-gw77c"
Jan 29 11:18:14.437455 kubelet[2618]: I0129 11:18:14.437239 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1cb2cf81-2f29-4713-bc65-f3ea01ca0b20-cilium-cgroup\") pod \"cilium-gw77c\" (UID: \"1cb2cf81-2f29-4713-bc65-f3ea01ca0b20\") " pod="kube-system/cilium-gw77c"
Jan 29 11:18:14.437455 kubelet[2618]: I0129 11:18:14.437258 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1cb2cf81-2f29-4713-bc65-f3ea01ca0b20-cni-path\") pod \"cilium-gw77c\" (UID: \"1cb2cf81-2f29-4713-bc65-f3ea01ca0b20\") " pod="kube-system/cilium-gw77c"
Jan 29 11:18:14.437455 kubelet[2618]: I0129 11:18:14.437274 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1cb2cf81-2f29-4713-bc65-f3ea01ca0b20-xtables-lock\") pod \"cilium-gw77c\" (UID: \"1cb2cf81-2f29-4713-bc65-f3ea01ca0b20\") " pod="kube-system/cilium-gw77c"
Jan 29 11:18:14.437455 kubelet[2618]: I0129 11:18:14.437289 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1cb2cf81-2f29-4713-bc65-f3ea01ca0b20-cilium-ipsec-secrets\") pod \"cilium-gw77c\" (UID: \"1cb2cf81-2f29-4713-bc65-f3ea01ca0b20\") " pod="kube-system/cilium-gw77c"
Jan 29 11:18:14.437684 kubelet[2618]: I0129 11:18:14.437304 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1cb2cf81-2f29-4713-bc65-f3ea01ca0b20-host-proc-sys-kernel\") pod \"cilium-gw77c\" (UID: \"1cb2cf81-2f29-4713-bc65-f3ea01ca0b20\") " pod="kube-system/cilium-gw77c"
Jan 29 11:18:14.437684 kubelet[2618]: I0129 11:18:14.437319 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1cb2cf81-2f29-4713-bc65-f3ea01ca0b20-hostproc\") pod \"cilium-gw77c\" (UID: \"1cb2cf81-2f29-4713-bc65-f3ea01ca0b20\") " pod="kube-system/cilium-gw77c"
Jan 29 11:18:14.437684 kubelet[2618]: I0129 11:18:14.437334 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1cb2cf81-2f29-4713-bc65-f3ea01ca0b20-clustermesh-secrets\") pod \"cilium-gw77c\" (UID: \"1cb2cf81-2f29-4713-bc65-f3ea01ca0b20\") " pod="kube-system/cilium-gw77c"
Jan 29 11:18:14.437684 kubelet[2618]: I0129 11:18:14.437351 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1cb2cf81-2f29-4713-bc65-f3ea01ca0b20-host-proc-sys-net\") pod \"cilium-gw77c\" (UID: \"1cb2cf81-2f29-4713-bc65-f3ea01ca0b20\") " pod="kube-system/cilium-gw77c"
Jan 29 11:18:14.437684 kubelet[2618]: I0129 11:18:14.437368 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1cb2cf81-2f29-4713-bc65-f3ea01ca0b20-etc-cni-netd\") pod \"cilium-gw77c\" (UID: \"1cb2cf81-2f29-4713-bc65-f3ea01ca0b20\") " pod="kube-system/cilium-gw77c"
Jan 29 11:18:14.437791 kubelet[2618]: I0129 11:18:14.437385 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1cb2cf81-2f29-4713-bc65-f3ea01ca0b20-cilium-config-path\") pod \"cilium-gw77c\" (UID: \"1cb2cf81-2f29-4713-bc65-f3ea01ca0b20\") " pod="kube-system/cilium-gw77c"
Jan 29 11:18:14.437791 kubelet[2618]: I0129 11:18:14.437399 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1cb2cf81-2f29-4713-bc65-f3ea01ca0b20-hubble-tls\") pod \"cilium-gw77c\" (UID: \"1cb2cf81-2f29-4713-bc65-f3ea01ca0b20\") " pod="kube-system/cilium-gw77c"
Jan 29 11:18:14.437791 kubelet[2618]: I0129 11:18:14.437604 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1cb2cf81-2f29-4713-bc65-f3ea01ca0b20-lib-modules\") pod \"cilium-gw77c\" (UID: \"1cb2cf81-2f29-4713-bc65-f3ea01ca0b20\") " pod="kube-system/cilium-gw77c"
Jan 29 11:18:14.437791 kubelet[2618]: I0129 11:18:14.437632 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1cb2cf81-2f29-4713-bc65-f3ea01ca0b20-cilium-run\") pod \"cilium-gw77c\" (UID: \"1cb2cf81-2f29-4713-bc65-f3ea01ca0b20\") " pod="kube-system/cilium-gw77c"
Jan 29 11:18:14.437791 kubelet[2618]: I0129 11:18:14.437652 2618 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1cb2cf81-2f29-4713-bc65-f3ea01ca0b20-bpf-maps\") pod \"cilium-gw77c\" (UID: \"1cb2cf81-2f29-4713-bc65-f3ea01ca0b20\") " pod="kube-system/cilium-gw77c"
Jan 29 11:18:14.451111 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 46660 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:18:14.452349 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:18:14.456166 systemd-logind[1434]: New session 25 of user core.
Jan 29 11:18:14.470582 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 11:18:14.520448 sshd[4433]: Connection closed by 10.0.0.1 port 46660
Jan 29 11:18:14.520584 sshd-session[4431]: pam_unix(sshd:session): session closed for user core
Jan 29 11:18:14.527803 systemd[1]: sshd@24-10.0.0.135:22-10.0.0.1:46660.service: Deactivated successfully.
Jan 29 11:18:14.530710 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 11:18:14.532238 systemd-logind[1434]: Session 25 logged out. Waiting for processes to exit.
Jan 29 11:18:14.540076 systemd[1]: Started sshd@25-10.0.0.135:22-10.0.0.1:46670.service - OpenSSH per-connection server daemon (10.0.0.1:46670).
Jan 29 11:18:14.554488 systemd-logind[1434]: Removed session 25.
Jan 29 11:18:14.584473 sshd[4439]: Accepted publickey for core from 10.0.0.1 port 46670 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:18:14.586093 sshd-session[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:18:14.590493 systemd-logind[1434]: New session 26 of user core.
Jan 29 11:18:14.601574 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 11:18:14.716010 kubelet[2618]: E0129 11:18:14.715889 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:18:14.716792 containerd[1448]: time="2025-01-29T11:18:14.716380318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gw77c,Uid:1cb2cf81-2f29-4713-bc65-f3ea01ca0b20,Namespace:kube-system,Attempt:0,}"
Jan 29 11:18:14.737168 containerd[1448]: time="2025-01-29T11:18:14.737082430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:18:14.737168 containerd[1448]: time="2025-01-29T11:18:14.737137790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:18:14.737168 containerd[1448]: time="2025-01-29T11:18:14.737154230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:18:14.737822 containerd[1448]: time="2025-01-29T11:18:14.737786750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:18:14.751637 kubelet[2618]: E0129 11:18:14.751444 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:18:14.753593 systemd[1]: Started cri-containerd-fe582ecf8b582ac3155a635c054ccf79778ff304ee0210bb1bb299238e80743c.scope - libcontainer container fe582ecf8b582ac3155a635c054ccf79778ff304ee0210bb1bb299238e80743c.
Jan 29 11:18:14.772972 containerd[1448]: time="2025-01-29T11:18:14.772912775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gw77c,Uid:1cb2cf81-2f29-4713-bc65-f3ea01ca0b20,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe582ecf8b582ac3155a635c054ccf79778ff304ee0210bb1bb299238e80743c\""
Jan 29 11:18:14.773831 kubelet[2618]: E0129 11:18:14.773807 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:18:14.775677 containerd[1448]: time="2025-01-29T11:18:14.775649254Z" level=info msg="CreateContainer within sandbox \"fe582ecf8b582ac3155a635c054ccf79778ff304ee0210bb1bb299238e80743c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 11:18:14.786193 containerd[1448]: time="2025-01-29T11:18:14.786132370Z" level=info msg="CreateContainer within sandbox \"fe582ecf8b582ac3155a635c054ccf79778ff304ee0210bb1bb299238e80743c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"07b17cd150f20d82399ed7eae254c38a434e185bdc6f820647956dfb84ed16f2\""
Jan 29 11:18:14.791366 containerd[1448]: time="2025-01-29T11:18:14.791299568Z" level=info msg="StartContainer for \"07b17cd150f20d82399ed7eae254c38a434e185bdc6f820647956dfb84ed16f2\""
Jan 29 11:18:14.820583 systemd[1]: Started cri-containerd-07b17cd150f20d82399ed7eae254c38a434e185bdc6f820647956dfb84ed16f2.scope - libcontainer container 07b17cd150f20d82399ed7eae254c38a434e185bdc6f820647956dfb84ed16f2.
Jan 29 11:18:14.841833 containerd[1448]: time="2025-01-29T11:18:14.841783908Z" level=info msg="StartContainer for \"07b17cd150f20d82399ed7eae254c38a434e185bdc6f820647956dfb84ed16f2\" returns successfully"
Jan 29 11:18:14.876314 systemd[1]: cri-containerd-07b17cd150f20d82399ed7eae254c38a434e185bdc6f820647956dfb84ed16f2.scope: Deactivated successfully.
Jan 29 11:18:14.903769 containerd[1448]: time="2025-01-29T11:18:14.903564243Z" level=info msg="shim disconnected" id=07b17cd150f20d82399ed7eae254c38a434e185bdc6f820647956dfb84ed16f2 namespace=k8s.io
Jan 29 11:18:14.903769 containerd[1448]: time="2025-01-29T11:18:14.903621843Z" level=warning msg="cleaning up after shim disconnected" id=07b17cd150f20d82399ed7eae254c38a434e185bdc6f820647956dfb84ed16f2 namespace=k8s.io
Jan 29 11:18:14.903769 containerd[1448]: time="2025-01-29T11:18:14.903630643Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:18:14.956301 kubelet[2618]: E0129 11:18:14.956148 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:18:14.958168 containerd[1448]: time="2025-01-29T11:18:14.958058341Z" level=info msg="CreateContainer within sandbox \"fe582ecf8b582ac3155a635c054ccf79778ff304ee0210bb1bb299238e80743c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 11:18:14.973856 containerd[1448]: time="2025-01-29T11:18:14.973711655Z" level=info msg="CreateContainer within sandbox \"fe582ecf8b582ac3155a635c054ccf79778ff304ee0210bb1bb299238e80743c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b96ff3d84fb5cc20ea8d1f25ff6b7defc9ee63c9f3423967772977a03a2608f8\""
Jan 29 11:18:14.974213 containerd[1448]: time="2025-01-29T11:18:14.974069775Z" level=info msg="StartContainer for \"b96ff3d84fb5cc20ea8d1f25ff6b7defc9ee63c9f3423967772977a03a2608f8\""
Jan 29 11:18:14.999559 systemd[1]: Started cri-containerd-b96ff3d84fb5cc20ea8d1f25ff6b7defc9ee63c9f3423967772977a03a2608f8.scope - libcontainer container b96ff3d84fb5cc20ea8d1f25ff6b7defc9ee63c9f3423967772977a03a2608f8.
Jan 29 11:18:15.019649 containerd[1448]: time="2025-01-29T11:18:15.019530597Z" level=info msg="StartContainer for \"b96ff3d84fb5cc20ea8d1f25ff6b7defc9ee63c9f3423967772977a03a2608f8\" returns successfully"
Jan 29 11:18:15.028008 systemd[1]: cri-containerd-b96ff3d84fb5cc20ea8d1f25ff6b7defc9ee63c9f3423967772977a03a2608f8.scope: Deactivated successfully.
Jan 29 11:18:15.052056 containerd[1448]: time="2025-01-29T11:18:15.051996744Z" level=info msg="shim disconnected" id=b96ff3d84fb5cc20ea8d1f25ff6b7defc9ee63c9f3423967772977a03a2608f8 namespace=k8s.io
Jan 29 11:18:15.052056 containerd[1448]: time="2025-01-29T11:18:15.052052344Z" level=warning msg="cleaning up after shim disconnected" id=b96ff3d84fb5cc20ea8d1f25ff6b7defc9ee63c9f3423967772977a03a2608f8 namespace=k8s.io
Jan 29 11:18:15.052056 containerd[1448]: time="2025-01-29T11:18:15.052063744Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:18:15.747384 kubelet[2618]: E0129 11:18:15.747302 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:18:15.959838 kubelet[2618]: E0129 11:18:15.959810 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:18:15.962548 containerd[1448]: time="2025-01-29T11:18:15.962494429Z" level=info msg="CreateContainer within sandbox \"fe582ecf8b582ac3155a635c054ccf79778ff304ee0210bb1bb299238e80743c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 11:18:15.974945 containerd[1448]: time="2025-01-29T11:18:15.974901865Z" level=info msg="CreateContainer within sandbox \"fe582ecf8b582ac3155a635c054ccf79778ff304ee0210bb1bb299238e80743c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6488efa046c2cf648085d14577388227440c8615a9bd013f8b58e986b21559f1\""
Jan 29 11:18:15.975299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3158231697.mount: Deactivated successfully.
Jan 29 11:18:15.975691 containerd[1448]: time="2025-01-29T11:18:15.975570264Z" level=info msg="StartContainer for \"6488efa046c2cf648085d14577388227440c8615a9bd013f8b58e986b21559f1\""
Jan 29 11:18:16.008594 systemd[1]: Started cri-containerd-6488efa046c2cf648085d14577388227440c8615a9bd013f8b58e986b21559f1.scope - libcontainer container 6488efa046c2cf648085d14577388227440c8615a9bd013f8b58e986b21559f1.
Jan 29 11:18:16.036125 systemd[1]: cri-containerd-6488efa046c2cf648085d14577388227440c8615a9bd013f8b58e986b21559f1.scope: Deactivated successfully.
Jan 29 11:18:16.041696 containerd[1448]: time="2025-01-29T11:18:16.041600639Z" level=info msg="StartContainer for \"6488efa046c2cf648085d14577388227440c8615a9bd013f8b58e986b21559f1\" returns successfully"
Jan 29 11:18:16.064860 containerd[1448]: time="2025-01-29T11:18:16.064622910Z" level=info msg="shim disconnected" id=6488efa046c2cf648085d14577388227440c8615a9bd013f8b58e986b21559f1 namespace=k8s.io
Jan 29 11:18:16.064860 containerd[1448]: time="2025-01-29T11:18:16.064676230Z" level=warning msg="cleaning up after shim disconnected" id=6488efa046c2cf648085d14577388227440c8615a9bd013f8b58e986b21559f1 namespace=k8s.io
Jan 29 11:18:16.064860 containerd[1448]: time="2025-01-29T11:18:16.064684110Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:18:16.075871 containerd[1448]: time="2025-01-29T11:18:16.074781106Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:18:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 11:18:16.541817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6488efa046c2cf648085d14577388227440c8615a9bd013f8b58e986b21559f1-rootfs.mount: Deactivated successfully.
Jan 29 11:18:16.963427 kubelet[2618]: E0129 11:18:16.963362 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:18:16.966617 containerd[1448]: time="2025-01-29T11:18:16.966460968Z" level=info msg="CreateContainer within sandbox \"fe582ecf8b582ac3155a635c054ccf79778ff304ee0210bb1bb299238e80743c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 11:18:16.977203 containerd[1448]: time="2025-01-29T11:18:16.977153364Z" level=info msg="CreateContainer within sandbox \"fe582ecf8b582ac3155a635c054ccf79778ff304ee0210bb1bb299238e80743c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f414597b43674b6e0a4ebd6d1aaec37ec8a6a306d8278f20ea0bd1d1ae8468a4\""
Jan 29 11:18:16.977773 containerd[1448]: time="2025-01-29T11:18:16.977748524Z" level=info msg="StartContainer for \"f414597b43674b6e0a4ebd6d1aaec37ec8a6a306d8278f20ea0bd1d1ae8468a4\""
Jan 29 11:18:17.007573 systemd[1]: Started cri-containerd-f414597b43674b6e0a4ebd6d1aaec37ec8a6a306d8278f20ea0bd1d1ae8468a4.scope - libcontainer container f414597b43674b6e0a4ebd6d1aaec37ec8a6a306d8278f20ea0bd1d1ae8468a4.
Jan 29 11:18:17.026100 systemd[1]: cri-containerd-f414597b43674b6e0a4ebd6d1aaec37ec8a6a306d8278f20ea0bd1d1ae8468a4.scope: Deactivated successfully.
Jan 29 11:18:17.027969 containerd[1448]: time="2025-01-29T11:18:17.027929385Z" level=info msg="StartContainer for \"f414597b43674b6e0a4ebd6d1aaec37ec8a6a306d8278f20ea0bd1d1ae8468a4\" returns successfully"
Jan 29 11:18:17.047995 containerd[1448]: time="2025-01-29T11:18:17.047800978Z" level=info msg="shim disconnected" id=f414597b43674b6e0a4ebd6d1aaec37ec8a6a306d8278f20ea0bd1d1ae8468a4 namespace=k8s.io
Jan 29 11:18:17.047995 containerd[1448]: time="2025-01-29T11:18:17.047854978Z" level=warning msg="cleaning up after shim disconnected" id=f414597b43674b6e0a4ebd6d1aaec37ec8a6a306d8278f20ea0bd1d1ae8468a4 namespace=k8s.io
Jan 29 11:18:17.047995 containerd[1448]: time="2025-01-29T11:18:17.047863578Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:18:17.541815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f414597b43674b6e0a4ebd6d1aaec37ec8a6a306d8278f20ea0bd1d1ae8468a4-rootfs.mount: Deactivated successfully.
Jan 29 11:18:17.967086 kubelet[2618]: E0129 11:18:17.966994 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:18:17.969557 containerd[1448]: time="2025-01-29T11:18:17.969510357Z" level=info msg="CreateContainer within sandbox \"fe582ecf8b582ac3155a635c054ccf79778ff304ee0210bb1bb299238e80743c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 11:18:17.981124 containerd[1448]: time="2025-01-29T11:18:17.980947473Z" level=info msg="CreateContainer within sandbox \"fe582ecf8b582ac3155a635c054ccf79778ff304ee0210bb1bb299238e80743c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d0121f7dd2ebc013815ae4e25b8d07f0fdcd8006f5f2884a3030ee7cec5ae27b\""
Jan 29 11:18:17.981816 containerd[1448]: time="2025-01-29T11:18:17.981769273Z" level=info msg="StartContainer for \"d0121f7dd2ebc013815ae4e25b8d07f0fdcd8006f5f2884a3030ee7cec5ae27b\""
Jan 29 11:18:18.008590 systemd[1]: Started cri-containerd-d0121f7dd2ebc013815ae4e25b8d07f0fdcd8006f5f2884a3030ee7cec5ae27b.scope - libcontainer container d0121f7dd2ebc013815ae4e25b8d07f0fdcd8006f5f2884a3030ee7cec5ae27b.
Jan 29 11:18:18.031101 containerd[1448]: time="2025-01-29T11:18:18.031037255Z" level=info msg="StartContainer for \"d0121f7dd2ebc013815ae4e25b8d07f0fdcd8006f5f2884a3030ee7cec5ae27b\" returns successfully"
Jan 29 11:18:18.278442 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 29 11:18:18.749889 kubelet[2618]: E0129 11:18:18.748826 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:18:18.971781 kubelet[2618]: E0129 11:18:18.971745 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:18:20.717789 kubelet[2618]: E0129 11:18:20.717715 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:18:21.009664 systemd-networkd[1387]: lxc_health: Link UP
Jan 29 11:18:21.018101 systemd-networkd[1387]: lxc_health: Gained carrier
Jan 29 11:18:22.719458 kubelet[2618]: E0129 11:18:22.719402 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:18:22.738151 kubelet[2618]: I0129 11:18:22.737834 2618 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gw77c" podStartSLOduration=8.737820044 podStartE2EDuration="8.737820044s" podCreationTimestamp="2025-01-29 11:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:18:18.986239152 +0000 UTC m=+80.306855562" watchObservedRunningTime="2025-01-29 11:18:22.737820044 +0000 UTC m=+84.058436454"
Jan 29 11:18:22.741558 systemd-networkd[1387]: lxc_health: Gained IPv6LL
Jan 29 11:18:22.988275 kubelet[2618]: E0129 11:18:22.988156 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:18:23.060941 kubelet[2618]: E0129 11:18:23.060898 2618 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59230->127.0.0.1:42687: write tcp 127.0.0.1:59230->127.0.0.1:42687: write: broken pipe
Jan 29 11:18:23.989794 kubelet[2618]: E0129 11:18:23.989742 2618 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:18:27.279842 sshd[4445]: Connection closed by 10.0.0.1 port 46670
Jan 29 11:18:27.280604 sshd-session[4439]: pam_unix(sshd:session): session closed for user core
Jan 29 11:18:27.283805 systemd[1]: sshd@25-10.0.0.135:22-10.0.0.1:46670.service: Deactivated successfully.
Jan 29 11:18:27.285520 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 11:18:27.286147 systemd-logind[1434]: Session 26 logged out. Waiting for processes to exit.
Jan 29 11:18:27.287025 systemd-logind[1434]: Removed session 26.