Jan 13 20:22:04.951473 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 13 20:22:04.951496 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025 Jan 13 20:22:04.951506 kernel: KASLR enabled Jan 13 20:22:04.951512 kernel: efi: EFI v2.7 by EDK II Jan 13 20:22:04.951518 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98 Jan 13 20:22:04.951523 kernel: random: crng init done Jan 13 20:22:04.951530 kernel: secureboot: Secure boot disabled Jan 13 20:22:04.951536 kernel: ACPI: Early table checksum verification disabled Jan 13 20:22:04.951542 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Jan 13 20:22:04.951550 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Jan 13 20:22:04.951556 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:04.951562 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:04.951568 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:04.951574 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:04.951581 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:04.951589 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:04.951595 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:04.951601 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:04.951608 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:04.951614 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jan 13 20:22:04.951620 kernel: NUMA: Failed to initialise from firmware Jan 13 20:22:04.951626 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jan 13 20:22:04.951633 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Jan 13 20:22:04.951639 kernel: Zone ranges: Jan 13 20:22:04.951645 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jan 13 20:22:04.951653 kernel: DMA32 empty Jan 13 20:22:04.951659 kernel: Normal empty Jan 13 20:22:04.951666 kernel: Movable zone start for each node Jan 13 20:22:04.951672 kernel: Early memory node ranges Jan 13 20:22:04.951678 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jan 13 20:22:04.951684 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jan 13 20:22:04.951691 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jan 13 20:22:04.951697 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jan 13 20:22:04.951703 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jan 13 20:22:04.951710 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jan 13 20:22:04.951716 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jan 13 20:22:04.951723 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jan 13 20:22:04.951730 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jan 13 20:22:04.951736 kernel: psci: probing for conduit method from ACPI. Jan 13 20:22:04.951743 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 13 20:22:04.951751 kernel: psci: Using standard PSCI v0.2 function IDs Jan 13 20:22:04.951758 kernel: psci: Trusted OS migration not required Jan 13 20:22:04.951764 kernel: psci: SMC Calling Convention v1.1 Jan 13 20:22:04.951773 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 13 20:22:04.951780 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 13 20:22:04.951786 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 13 20:22:04.951793 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 13 20:22:04.951800 kernel: Detected PIPT I-cache on CPU0 Jan 13 20:22:04.951806 kernel: CPU features: detected: GIC system register CPU interface Jan 13 20:22:04.951813 kernel: CPU features: detected: Hardware dirty bit management Jan 13 20:22:04.951820 kernel: CPU features: detected: Spectre-v4 Jan 13 20:22:04.951826 kernel: CPU features: detected: Spectre-BHB Jan 13 20:22:04.951833 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 13 20:22:04.951840 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 13 20:22:04.951847 kernel: CPU features: detected: ARM erratum 1418040 Jan 13 20:22:04.951854 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 13 20:22:04.951860 kernel: alternatives: applying boot alternatives Jan 13 20:22:04.951868 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436 Jan 13 20:22:04.951875 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 20:22:04.951882 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 20:22:04.951889 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:22:04.951895 kernel: Fallback order for Node 0: 0 Jan 13 20:22:04.951902 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jan 13 20:22:04.951908 kernel: Policy zone: DMA Jan 13 20:22:04.951917 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 20:22:04.951923 kernel: software IO TLB: area num 4. Jan 13 20:22:04.951930 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jan 13 20:22:04.951937 kernel: Memory: 2386320K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185968K reserved, 0K cma-reserved) Jan 13 20:22:04.951943 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 13 20:22:04.951950 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 20:22:04.951957 kernel: rcu: RCU event tracing is enabled. Jan 13 20:22:04.951964 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 13 20:22:04.951971 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 20:22:04.951978 kernel: Tracing variant of Tasks RCU enabled. Jan 13 20:22:04.951984 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 13 20:22:04.951991 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 13 20:22:04.951999 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 13 20:22:04.952006 kernel: GICv3: 256 SPIs implemented Jan 13 20:22:04.952012 kernel: GICv3: 0 Extended SPIs implemented Jan 13 20:22:04.952019 kernel: Root IRQ handler: gic_handle_irq Jan 13 20:22:04.952026 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 13 20:22:04.952033 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 13 20:22:04.952039 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 13 20:22:04.952046 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jan 13 20:22:04.952053 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jan 13 20:22:04.952076 kernel: GICv3: using LPI property table @0x00000000400f0000 Jan 13 20:22:04.952083 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jan 13 20:22:04.952092 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 20:22:04.952099 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 20:22:04.952106 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 13 20:22:04.952113 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 13 20:22:04.952120 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 13 20:22:04.952127 kernel: arm-pv: using stolen time PV Jan 13 20:22:04.952134 kernel: Console: colour dummy device 80x25 Jan 13 20:22:04.952141 kernel: ACPI: Core revision 20230628 Jan 13 20:22:04.952148 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 13 20:22:04.952155 kernel: pid_max: default: 32768 minimum: 301 Jan 13 20:22:04.952163 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:22:04.952170 kernel: landlock: Up and running. Jan 13 20:22:04.952177 kernel: SELinux: Initializing. Jan 13 20:22:04.952190 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:22:04.952197 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:22:04.952204 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 20:22:04.952211 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 13 20:22:04.952218 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:22:04.952225 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:22:04.952234 kernel: Platform MSI: ITS@0x8080000 domain created Jan 13 20:22:04.952241 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 13 20:22:04.952247 kernel: Remapping and enabling EFI services. Jan 13 20:22:04.952254 kernel: smp: Bringing up secondary CPUs ... 
Jan 13 20:22:04.952261 kernel: Detected PIPT I-cache on CPU1 Jan 13 20:22:04.952268 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 13 20:22:04.952275 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jan 13 20:22:04.952282 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 20:22:04.952289 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 13 20:22:04.952296 kernel: Detected PIPT I-cache on CPU2 Jan 13 20:22:04.952304 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 13 20:22:04.952311 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jan 13 20:22:04.952323 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 20:22:04.952331 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 13 20:22:04.952339 kernel: Detected PIPT I-cache on CPU3 Jan 13 20:22:04.952346 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 13 20:22:04.952353 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jan 13 20:22:04.952360 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 20:22:04.952367 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 13 20:22:04.952376 kernel: smp: Brought up 1 node, 4 CPUs Jan 13 20:22:04.952383 kernel: SMP: Total of 4 processors activated. Jan 13 20:22:04.952390 kernel: CPU features: detected: 32-bit EL0 Support Jan 13 20:22:04.952398 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 13 20:22:04.952405 kernel: CPU features: detected: Common not Private translations Jan 13 20:22:04.952412 kernel: CPU features: detected: CRC32 instructions Jan 13 20:22:04.952420 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 13 20:22:04.952427 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 13 20:22:04.952436 kernel: CPU features: detected: LSE atomic instructions Jan 13 20:22:04.952443 kernel: CPU features: detected: Privileged Access Never Jan 13 20:22:04.952450 kernel: CPU features: detected: RAS Extension Support Jan 13 20:22:04.952457 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 13 20:22:04.952464 kernel: CPU: All CPU(s) started at EL1 Jan 13 20:22:04.952472 kernel: alternatives: applying system-wide alternatives Jan 13 20:22:04.952479 kernel: devtmpfs: initialized Jan 13 20:22:04.952486 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:22:04.952494 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 13 20:22:04.952502 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:22:04.952509 kernel: SMBIOS 3.0.0 present. 
Jan 13 20:22:04.952517 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Jan 13 20:22:04.952524 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:22:04.952531 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 13 20:22:04.952538 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 13 20:22:04.952546 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 13 20:22:04.952553 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:22:04.952560 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Jan 13 20:22:04.952569 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:22:04.952576 kernel: cpuidle: using governor menu Jan 13 20:22:04.952583 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 13 20:22:04.952590 kernel: ASID allocator initialised with 32768 entries Jan 13 20:22:04.952598 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:22:04.952605 kernel: Serial: AMBA PL011 UART driver Jan 13 20:22:04.952613 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 13 20:22:04.952620 kernel: Modules: 0 pages in range for non-PLT usage Jan 13 20:22:04.952627 kernel: Modules: 508960 pages in range for PLT usage Jan 13 20:22:04.952635 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:22:04.952643 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:22:04.952650 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 13 20:22:04.952657 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 13 20:22:04.952665 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:22:04.952672 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:22:04.952679 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 13 20:22:04.952686 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 13 20:22:04.952693 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:22:04.952702 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:22:04.952709 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:22:04.952716 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:22:04.952723 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 20:22:04.952731 kernel: ACPI: Interpreter enabled Jan 13 20:22:04.952738 kernel: ACPI: Using GIC for interrupt routing Jan 13 20:22:04.952745 kernel: ACPI: MCFG table detected, 1 entries Jan 13 20:22:04.952752 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 13 20:22:04.952759 kernel: printk: console [ttyAMA0] enabled Jan 13 20:22:04.952768 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 20:22:04.952911 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:22:04.952987 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 13 20:22:04.953054 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 13 20:22:04.953143 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 13 20:22:04.953221 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 13 20:22:04.953232 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 13 20:22:04.953243 
kernel: PCI host bridge to bus 0000:00 Jan 13 20:22:04.953316 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 13 20:22:04.953376 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 13 20:22:04.953435 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 13 20:22:04.953504 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 20:22:04.953584 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 13 20:22:04.953659 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 20:22:04.953731 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jan 13 20:22:04.953797 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jan 13 20:22:04.953862 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 13 20:22:04.953928 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 13 20:22:04.953994 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jan 13 20:22:04.954081 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jan 13 20:22:04.954158 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 13 20:22:04.954227 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 13 20:22:04.954286 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 13 20:22:04.954296 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 13 20:22:04.954304 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 13 20:22:04.954311 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 13 20:22:04.954318 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 13 20:22:04.954325 kernel: iommu: Default domain type: Translated Jan 13 20:22:04.954333 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 13 20:22:04.954342 kernel: efivars: Registered efivars operations Jan 13 20:22:04.954349 kernel: vgaarb: loaded Jan 13 20:22:04.954357 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 13 20:22:04.954364 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 20:22:04.954371 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:22:04.954379 kernel: pnp: PnP ACPI init Jan 13 20:22:04.954451 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 13 20:22:04.954461 kernel: pnp: PnP ACPI: found 1 devices Jan 13 20:22:04.954471 kernel: NET: Registered PF_INET protocol family Jan 13 20:22:04.954479 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 20:22:04.954486 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 20:22:04.954493 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:22:04.954501 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 20:22:04.954509 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 20:22:04.954516 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 20:22:04.954523 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:22:04.954531 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:22:04.954539 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:22:04.954547 kernel: PCI: CLS 0 bytes, default 64 Jan 13 20:22:04.954554 kernel: kvm [1]: HYP mode not available 
Jan 13 20:22:04.954561 kernel: Initialise system trusted keyrings Jan 13 20:22:04.954568 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 20:22:04.954576 kernel: Key type asymmetric registered Jan 13 20:22:04.954583 kernel: Asymmetric key parser 'x509' registered Jan 13 20:22:04.954591 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 13 20:22:04.954598 kernel: io scheduler mq-deadline registered Jan 13 20:22:04.954607 kernel: io scheduler kyber registered Jan 13 20:22:04.954614 kernel: io scheduler bfq registered Jan 13 20:22:04.954621 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 20:22:04.954629 kernel: ACPI: button: Power Button [PWRB] Jan 13 20:22:04.954637 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 20:22:04.954703 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jan 13 20:22:04.954714 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:22:04.954721 kernel: thunder_xcv, ver 1.0 Jan 13 20:22:04.954729 kernel: thunder_bgx, ver 1.0 Jan 13 20:22:04.954737 kernel: nicpf, ver 1.0 Jan 13 20:22:04.954745 kernel: nicvf, ver 1.0 Jan 13 20:22:04.954823 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 20:22:04.954886 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:22:04 UTC (1736799724) Jan 13 20:22:04.954896 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 20:22:04.954904 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 13 20:22:04.954911 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 13 20:22:04.954918 kernel: watchdog: Hard watchdog permanently disabled Jan 13 20:22:04.954928 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:22:04.954935 kernel: Segment Routing with IPv6 Jan 13 20:22:04.954942 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:22:04.954949 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:22:04.954957 kernel: Key type dns_resolver registered Jan 13 20:22:04.954964 kernel: registered taskstats version 1 Jan 13 20:22:04.954971 kernel: Loading compiled-in X.509 certificates Jan 13 20:22:04.954979 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb' Jan 13 20:22:04.954986 kernel: Key type .fscrypt registered Jan 13 20:22:04.954994 kernel: Key type fscrypt-provisioning registered Jan 13 20:22:04.955002 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 20:22:04.955009 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:22:04.955016 kernel: ima: No architecture policies found Jan 13 20:22:04.955023 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 20:22:04.955031 kernel: clk: Disabling unused clocks Jan 13 20:22:04.955038 kernel: Freeing unused kernel memory: 39680K Jan 13 20:22:04.955045 kernel: Run /init as init process Jan 13 20:22:04.955052 kernel: with arguments: Jan 13 20:22:04.955153 kernel: /init Jan 13 20:22:04.955161 kernel: with environment: Jan 13 20:22:04.955168 kernel: HOME=/ Jan 13 20:22:04.955176 kernel: TERM=linux Jan 13 20:22:04.955191 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:22:04.955200 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:22:04.955210 systemd[1]: Detected virtualization kvm. Jan 13 20:22:04.955218 systemd[1]: Detected architecture arm64. Jan 13 20:22:04.955228 systemd[1]: Running in initrd. Jan 13 20:22:04.955236 systemd[1]: No hostname configured, using default hostname. Jan 13 20:22:04.955243 systemd[1]: Hostname set to . Jan 13 20:22:04.955251 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:22:04.955259 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:22:04.955267 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:22:04.955275 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:22:04.955283 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:22:04.955292 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:22:04.955300 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:22:04.955308 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:22:04.955318 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:22:04.955326 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:22:04.955334 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:22:04.955342 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:22:04.955352 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:22:04.955359 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:22:04.955367 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:22:04.955375 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:22:04.955383 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:22:04.955391 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:22:04.955399 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:22:04.955406 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:22:04.955416 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 13 20:22:04.955424 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:22:04.955432 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:22:04.955440 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:22:04.955448 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:22:04.955456 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:22:04.955464 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:22:04.955472 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:22:04.955480 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:22:04.955489 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:22:04.955497 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:22:04.955505 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:22:04.955514 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:22:04.955522 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:22:04.955530 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:22:04.955540 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:04.955569 systemd-journald[239]: Collecting audit messages is disabled. Jan 13 20:22:04.955591 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:22:04.955600 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:22:04.955609 systemd-journald[239]: Journal started Jan 13 20:22:04.955627 systemd-journald[239]: Runtime Journal (/run/log/journal/89e0a4c260ea431b8086270e344dff76) is 5.9M, max 47.3M, 41.4M free. Jan 13 20:22:04.948541 systemd-modules-load[240]: Inserted module 'overlay' Jan 13 20:22:04.958229 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:22:04.961107 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:22:04.961603 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:22:04.964647 systemd-modules-load[240]: Inserted module 'br_netfilter' Jan 13 20:22:04.965601 kernel: Bridge firewalling registered Jan 13 20:22:04.966257 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:22:04.968138 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:22:04.974199 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:22:04.976549 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:22:04.977989 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:22:04.981696 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:22:04.996277 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:22:04.997463 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:22:05.001712 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 13 20:22:05.006134 dracut-cmdline[276]: dracut-dracut-053 Jan 13 20:22:05.008620 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436 Jan 13 20:22:05.035498 systemd-resolved[282]: Positive Trust Anchors: Jan 13 20:22:05.035571 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:22:05.035601 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:22:05.040456 systemd-resolved[282]: Defaulting to hostname 'linux'. Jan 13 20:22:05.041477 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:22:05.046604 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:22:05.085097 kernel: SCSI subsystem initialized Jan 13 20:22:05.090083 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:22:05.101113 kernel: iscsi: registered transport (tcp) Jan 13 20:22:05.114441 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:22:05.114467 kernel: QLogic iSCSI HBA Driver Jan 13 20:22:05.159100 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:22:05.167264 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:22:05.184707 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 20:22:05.184773 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:22:05.185886 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:22:05.232095 kernel: raid6: neonx8 gen() 15788 MB/s Jan 13 20:22:05.249087 kernel: raid6: neonx4 gen() 15651 MB/s Jan 13 20:22:05.266086 kernel: raid6: neonx2 gen() 13092 MB/s Jan 13 20:22:05.283087 kernel: raid6: neonx1 gen() 10432 MB/s Jan 13 20:22:05.300086 kernel: raid6: int64x8 gen() 6933 MB/s Jan 13 20:22:05.317084 kernel: raid6: int64x4 gen() 7350 MB/s Jan 13 20:22:05.334079 kernel: raid6: int64x2 gen() 6086 MB/s Jan 13 20:22:05.351229 kernel: raid6: int64x1 gen() 5027 MB/s Jan 13 20:22:05.351253 kernel: raid6: using algorithm neonx8 gen() 15788 MB/s Jan 13 20:22:05.369249 kernel: raid6: .... xor() 11879 MB/s, rmw enabled Jan 13 20:22:05.369268 kernel: raid6: using neon recovery algorithm Jan 13 20:22:05.378417 kernel: xor: measuring software checksum speed Jan 13 20:22:05.378447 kernel: 8regs : 19726 MB/sec Jan 13 20:22:05.379122 kernel: 32regs : 19617 MB/sec Jan 13 20:22:05.380442 kernel: arm64_neon : 25728 MB/sec Jan 13 20:22:05.380456 kernel: xor: using function: arm64_neon (25728 MB/sec) Jan 13 20:22:05.433088 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:22:05.444340 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 13 20:22:05.455274 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:22:05.467562 systemd-udevd[462]: Using default interface naming scheme 'v255'. Jan 13 20:22:05.470761 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:22:05.473856 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:22:05.489350 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Jan 13 20:22:05.517717 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:22:05.526272 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:22:05.566268 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:22:05.575788 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:22:05.589416 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:22:05.591096 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:22:05.593235 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:22:05.595373 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:22:05.609512 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:22:05.614615 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jan 13 20:22:05.632416 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 20:22:05.632527 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:22:05.632540 kernel: GPT:9289727 != 19775487 Jan 13 20:22:05.632549 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:22:05.632559 kernel: GPT:9289727 != 19775487 Jan 13 20:22:05.632570 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:22:05.632580 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:22:05.620002 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:22:05.632553 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:22:05.632658 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:22:05.635840 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:22:05.638099 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:22:05.638389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:05.640576 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:22:05.647454 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:22:05.655081 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (513) Jan 13 20:22:05.655119 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (512) Jan 13 20:22:05.659889 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 20:22:05.664215 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:05.672504 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 20:22:05.677148 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 13 20:22:05.680899 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 20:22:05.682213 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 20:22:05.697220 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:22:05.699166 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:22:05.704954 disk-uuid[554]: Primary Header is updated. Jan 13 20:22:05.704954 disk-uuid[554]: Secondary Entries is updated. Jan 13 20:22:05.704954 disk-uuid[554]: Secondary Header is updated. Jan 13 20:22:05.709086 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:22:05.726679 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:22:06.723089 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:22:06.723650 disk-uuid[555]: The operation has completed successfully. Jan 13 20:22:06.742446 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:22:06.742548 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:22:06.765247 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:22:06.769041 sh[574]: Success Jan 13 20:22:06.788853 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 20:22:06.827575 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:22:06.829484 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:22:06.830507 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:22:06.844943 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78 Jan 13 20:22:06.844992 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:22:06.845003 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:22:06.845885 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:22:06.846643 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:22:06.851111 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:22:06.852549 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:22:06.867268 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:22:06.869019 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:22:06.878836 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:22:06.878872 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:22:06.878887 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:22:06.882091 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:22:06.889240 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:22:06.891191 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:22:06.898126 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:22:06.903228 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 13 20:22:06.963534 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:22:06.975228 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:22:06.999448 ignition[670]: Ignition 2.20.0 Jan 13 20:22:06.999458 ignition[670]: Stage: fetch-offline Jan 13 20:22:06.999491 ignition[670]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:06.999499 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:22:06.999653 ignition[670]: parsed url from cmdline: "" Jan 13 20:22:06.999656 ignition[670]: no config URL provided Jan 13 20:22:06.999661 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:22:06.999672 ignition[670]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:22:07.004403 systemd-networkd[767]: lo: Link UP Jan 13 20:22:06.999699 ignition[670]: op(1): [started] loading QEMU firmware config module Jan 13 20:22:07.004407 systemd-networkd[767]: lo: Gained carrier Jan 13 20:22:06.999703 ignition[670]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 20:22:07.005181 systemd-networkd[767]: Enumeration completed Jan 13 20:22:07.008683 ignition[670]: op(1): [finished] loading QEMU firmware config module Jan 13 20:22:07.005677 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:07.005679 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:22:07.006570 systemd-networkd[767]: eth0: Link UP Jan 13 20:22:07.006573 systemd-networkd[767]: eth0: Gained carrier Jan 13 20:22:07.006579 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:07.006772 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:22:07.007907 systemd[1]: Reached target network.target - Network. Jan 13 20:22:07.019115 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:22:07.056230 ignition[670]: parsing config with SHA512: d9f44098d2e1e4582796f3287a9abfe64f4d6c3cf50af0edbbac9de9ab746af457f6b45e55d4648c81bd1b41b20ba9a833f5da237c11d5802a8a445128ce169b Jan 13 20:22:07.062040 unknown[670]: fetched base config from "system" Jan 13 20:22:07.062054 unknown[670]: fetched user config from "qemu" Jan 13 20:22:07.062886 ignition[670]: fetch-offline: fetch-offline passed Jan 13 20:22:07.062968 ignition[670]: Ignition finished successfully Jan 13 20:22:07.065408 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:22:07.068123 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 20:22:07.072224 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:22:07.084757 ignition[774]: Ignition 2.20.0 Jan 13 20:22:07.084768 ignition[774]: Stage: kargs Jan 13 20:22:07.084925 ignition[774]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:07.084934 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:22:07.085847 ignition[774]: kargs: kargs passed Jan 13 20:22:07.085891 ignition[774]: Ignition finished successfully Jan 13 20:22:07.089161 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 13 20:22:07.097225 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:22:07.106896 ignition[784]: Ignition 2.20.0 Jan 13 20:22:07.106905 ignition[784]: Stage: disks Jan 13 20:22:07.107080 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:07.107091 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:22:07.109569 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:22:07.107960 ignition[784]: disks: disks passed Jan 13 20:22:07.108002 ignition[784]: Ignition finished successfully Jan 13 20:22:07.112442 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:22:07.113831 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:22:07.115577 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:22:07.117448 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:22:07.119560 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:22:07.127204 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:22:07.136121 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 20:22:07.139373 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:22:07.141941 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:22:07.185879 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:22:07.187424 kernel: EXT4-fs (vda9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none. Jan 13 20:22:07.187149 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:22:07.201181 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:22:07.202810 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:22:07.203940 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:22:07.204007 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:22:07.211623 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (804) Jan 13 20:22:07.204056 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:22:07.217140 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:22:07.217159 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:22:07.217169 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:22:07.217185 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:22:07.208164 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:22:07.210305 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:22:07.218512 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:22:07.255519 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:22:07.259429 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:22:07.263019 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:22:07.267032 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:22:07.330256 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:22:07.344148 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:22:07.345687 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:22:07.351115 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:22:07.366443 ignition[919]: INFO : Ignition 2.20.0 Jan 13 20:22:07.366443 ignition[919]: INFO : Stage: mount Jan 13 20:22:07.368023 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:07.368023 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:22:07.368023 ignition[919]: INFO : mount: mount passed Jan 13 20:22:07.368023 ignition[919]: INFO : Ignition finished successfully Jan 13 20:22:07.369353 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:22:07.370516 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:22:07.384143 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:22:07.842517 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:22:07.855239 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:22:07.861723 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (933) Jan 13 20:22:07.861753 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:22:07.862693 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:22:07.862712 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:22:07.866088 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:22:07.866554 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:22:07.881034 ignition[950]: INFO : Ignition 2.20.0 Jan 13 20:22:07.881034 ignition[950]: INFO : Stage: files Jan 13 20:22:07.882647 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:07.882647 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:22:07.882647 ignition[950]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:22:07.886038 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:22:07.886038 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:22:07.886038 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:22:07.886038 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:22:07.886038 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:22:07.886038 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 20:22:07.886038 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 13 20:22:07.884812 unknown[950]: wrote ssh authorized keys file for user: core Jan 13 20:22:08.081166 systemd-networkd[767]: eth0: Gained IPv6LL Jan 13 20:22:08.101153 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 20:22:08.402945 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 20:22:08.402945 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:22:08.406824 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 13 20:22:08.696134 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 20:22:08.756012 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:22:08.758007 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:22:08.758007 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:22:08.758007 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:22:08.758007 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:22:08.758007 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:22:08.758007 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:22:08.758007 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:22:08.758007 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:22:08.758007 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:22:08.758007 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:22:08.758007 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 20:22:08.758007 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 20:22:08.758007 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 20:22:08.758007 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 13 20:22:09.007924 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 20:22:09.265018 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 20:22:09.265018 ignition[950]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 20:22:09.268549 ignition[950]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:22:09.268549 ignition[950]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:22:09.268549 ignition[950]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 20:22:09.268549 ignition[950]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 13 20:22:09.268549 ignition[950]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:22:09.268549 ignition[950]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:22:09.268549 ignition[950]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 13 20:22:09.268549 ignition[950]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 20:22:09.300946 ignition[950]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:22:09.309205 ignition[950]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:22:09.311637 ignition[950]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 20:22:09.311637 ignition[950]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:22:09.311637 ignition[950]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:22:09.311637 ignition[950]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:22:09.311637 
ignition[950]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:22:09.311637 ignition[950]: INFO : files: files passed Jan 13 20:22:09.311637 ignition[950]: INFO : Ignition finished successfully Jan 13 20:22:09.312654 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:22:09.333228 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:22:09.335093 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:22:09.339547 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:22:09.339661 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:22:09.346619 initrd-setup-root-after-ignition[979]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 20:22:09.349584 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:22:09.349584 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:22:09.352898 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:22:09.352458 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:22:09.354243 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:22:09.363540 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:22:09.380581 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:22:09.380688 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:22:09.383004 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:22:09.384632 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:22:09.386478 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:22:09.387159 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:22:09.403298 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:22:09.407181 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:22:09.417122 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:22:09.418345 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:22:09.420333 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:22:09.422048 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:22:09.422203 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:22:09.424674 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:22:09.426735 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:22:09.428418 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:22:09.430075 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:22:09.432146 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:22:09.434247 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 13 20:22:09.436131 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:22:09.438205 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:22:09.440254 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:22:09.441928 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:22:09.443515 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:22:09.443623 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:22:09.445853 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:22:09.446982 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:22:09.448879 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:22:09.449727 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:22:09.450895 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:22:09.451004 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:22:09.453618 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:22:09.453727 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:22:09.456018 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:22:09.457464 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:22:09.457562 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:22:09.459286 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:22:09.460952 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:22:09.462841 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:22:09.462925 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:22:09.464494 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:22:09.464565 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:22:09.468107 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:22:09.468220 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:22:09.469903 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:22:09.469992 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:22:09.481276 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:22:09.482132 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:22:09.482271 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:22:09.485542 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:22:09.487140 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:22:09.487282 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:22:09.490913 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:22:09.493792 ignition[1006]: INFO : Ignition 2.20.0 Jan 13 20:22:09.493792 ignition[1006]: INFO : Stage: umount Jan 13 20:22:09.491192 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 13 20:22:09.496788 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:09.496788 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:22:09.496788 ignition[1006]: INFO : umount: umount passed Jan 13 20:22:09.496788 ignition[1006]: INFO : Ignition finished successfully Jan 13 20:22:09.496500 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:22:09.496611 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:22:09.499760 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:22:09.501511 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:22:09.501607 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:22:09.504247 systemd[1]: Stopped target network.target - Network. Jan 13 20:22:09.505413 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:22:09.505551 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:22:09.507384 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:22:09.507431 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:22:09.509030 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:22:09.509084 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:22:09.510988 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:22:09.511036 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:22:09.514521 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:22:09.516152 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:22:09.525101 systemd-networkd[767]: eth0: DHCPv6 lease lost Jan 13 20:22:09.526773 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:22:09.526899 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:22:09.528681 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:22:09.528779 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:22:09.531261 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:22:09.531298 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:22:09.540159 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:22:09.541082 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:22:09.541142 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:22:09.543145 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:22:09.543197 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:22:09.545135 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:22:09.545187 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:22:09.547433 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:22:09.547476 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:22:09.549419 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:22:09.561456 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:22:09.561664 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jan 13 20:22:09.573815 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:22:09.574664 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:22:09.576479 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:22:09.576518 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:22:09.578347 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:22:09.578380 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:22:09.579976 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:22:09.580019 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:22:09.582583 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:22:09.582625 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:22:09.585221 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:22:09.585281 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:22:09.592189 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:22:09.593178 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:22:09.593231 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:22:09.595489 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:22:09.595530 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:09.597737 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:22:09.599096 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:22:09.600768 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:22:09.600843 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:22:09.603359 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:22:09.604413 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:22:09.604466 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:22:09.606698 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:22:09.615198 systemd[1]: Switching root. Jan 13 20:22:09.642264 systemd-journald[239]: Journal stopped Jan 13 20:22:10.352480 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Jan 13 20:22:10.352535 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:22:10.352547 kernel: SELinux: policy capability open_perms=1 Jan 13 20:22:10.352557 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:22:10.352566 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:22:10.352575 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:22:10.352589 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:22:10.352601 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:22:10.352611 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:22:10.352622 kernel: audit: type=1403 audit(1736799729.807:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:22:10.352633 systemd[1]: Successfully loaded SELinux policy in 31.404ms. Jan 13 20:22:10.352651 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.788ms. 
Jan 13 20:22:10.352663 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:22:10.352674 systemd[1]: Detected virtualization kvm. Jan 13 20:22:10.352685 systemd[1]: Detected architecture arm64. Jan 13 20:22:10.352697 systemd[1]: Detected first boot. Jan 13 20:22:10.352708 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:22:10.352718 zram_generator::config[1051]: No configuration found. Jan 13 20:22:10.352730 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:22:10.352740 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:22:10.352750 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:22:10.352761 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:22:10.352772 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:22:10.352785 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:22:10.352795 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:22:10.352805 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:22:10.352856 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:22:10.352872 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:22:10.352883 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:22:10.352893 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:22:10.352905 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:22:10.352916 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:22:10.352935 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:22:10.352946 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:22:10.352956 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:22:10.352967 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:22:10.352977 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 20:22:10.352989 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:22:10.352999 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:22:10.353009 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:22:10.353020 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:22:10.353033 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:22:10.353043 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:22:10.353054 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:22:10.353080 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:22:10.353093 systemd[1]: Reached target swap.target - Swaps. 
Jan 13 20:22:10.353104 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:22:10.353114 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:22:10.353125 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:22:10.353138 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:22:10.353149 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:22:10.353161 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:22:10.353177 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:22:10.353190 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:22:10.353200 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:22:10.353210 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:22:10.353221 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:22:10.353232 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:22:10.353245 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:22:10.353256 systemd[1]: Reached target machines.target - Containers. Jan 13 20:22:10.353267 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:22:10.353278 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:22:10.353288 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:22:10.353299 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:22:10.353309 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:22:10.353319 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:22:10.353332 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:22:10.353342 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:22:10.353353 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:22:10.353364 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:22:10.353375 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:22:10.353385 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:22:10.353397 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:22:10.353408 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:22:10.353418 kernel: fuse: init (API version 7.39) Jan 13 20:22:10.353430 kernel: loop: module loaded Jan 13 20:22:10.353440 kernel: ACPI: bus type drm_connector registered Jan 13 20:22:10.353450 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:22:10.353461 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:22:10.353471 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:22:10.353482 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 13 20:22:10.353493 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:22:10.353522 systemd-journald[1122]: Collecting audit messages is disabled. Jan 13 20:22:10.353546 systemd-journald[1122]: Journal started Jan 13 20:22:10.353567 systemd-journald[1122]: Runtime Journal (/run/log/journal/89e0a4c260ea431b8086270e344dff76) is 5.9M, max 47.3M, 41.4M free. Jan 13 20:22:10.147105 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:22:10.172333 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:22:10.172674 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:22:10.355609 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:22:10.355641 systemd[1]: Stopped verity-setup.service. Jan 13 20:22:10.359685 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:22:10.360313 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:22:10.361449 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:22:10.362721 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:22:10.363789 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:22:10.365007 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:22:10.366241 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:22:10.369097 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:22:10.370465 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:22:10.371950 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:22:10.372119 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:22:10.373502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:22:10.373627 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:22:10.376395 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:22:10.376535 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:22:10.377811 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:22:10.377943 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:22:10.379412 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:22:10.379540 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:22:10.380831 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:22:10.380962 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:22:10.382293 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:22:10.383729 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:22:10.387088 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:22:10.398621 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:22:10.412164 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:22:10.414144 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:22:10.415263 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Jan 13 20:22:10.415305 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:22:10.416976 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:22:10.419118 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:22:10.421071 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:22:10.422000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:22:10.423443 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:22:10.425269 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:22:10.426546 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:22:10.430222 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:22:10.431401 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:22:10.433126 systemd-journald[1122]: Time spent on flushing to /var/log/journal/89e0a4c260ea431b8086270e344dff76 is 15.043ms for 856 entries. Jan 13 20:22:10.433126 systemd-journald[1122]: System Journal (/var/log/journal/89e0a4c260ea431b8086270e344dff76) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:22:10.452613 systemd-journald[1122]: Received client request to flush runtime journal. Jan 13 20:22:10.434299 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:22:10.437398 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:22:10.440300 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:22:10.446090 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:22:10.447489 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:22:10.448731 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:22:10.450083 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:22:10.451497 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:22:10.454240 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:22:10.457700 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:22:10.460519 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:22:10.465078 kernel: loop0: detected capacity change from 0 to 116808 Jan 13 20:22:10.467549 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:22:10.479081 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:22:10.485141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:22:10.490149 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:22:10.490818 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:22:10.495916 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 13 20:22:10.507088 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:22:10.509228 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 20:22:10.521136 kernel: loop1: detected capacity change from 0 to 113536 Jan 13 20:22:10.524254 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Jan 13 20:22:10.524271 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Jan 13 20:22:10.528330 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:22:10.567541 kernel: loop2: detected capacity change from 0 to 189592 Jan 13 20:22:10.600089 kernel: loop3: detected capacity change from 0 to 116808 Jan 13 20:22:10.606480 kernel: loop4: detected capacity change from 0 to 113536 Jan 13 20:22:10.610143 kernel: loop5: detected capacity change from 0 to 189592 Jan 13 20:22:10.613021 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 20:22:10.613420 (sd-merge)[1186]: Merged extensions into '/usr'. Jan 13 20:22:10.620618 systemd[1]: Reloading requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:22:10.620633 systemd[1]: Reloading... Jan 13 20:22:10.669123 zram_generator::config[1209]: No configuration found. Jan 13 20:22:10.714751 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:22:10.761507 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:22:10.795905 systemd[1]: Reloading finished in 174 ms. Jan 13 20:22:10.833036 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:22:10.834536 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:22:10.847364 systemd[1]: Starting ensure-sysext.service... Jan 13 20:22:10.848971 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:22:10.862497 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:22:10.862512 systemd[1]: Reloading... Jan 13 20:22:10.868126 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:22:10.868639 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:22:10.869399 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:22:10.869701 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jan 13 20:22:10.869817 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jan 13 20:22:10.871907 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:22:10.872003 systemd-tmpfiles[1247]: Skipping /boot Jan 13 20:22:10.878676 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:22:10.878785 systemd-tmpfiles[1247]: Skipping /boot Jan 13 20:22:10.909094 zram_generator::config[1274]: No configuration found. 
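The (sd-merge) entries above show systemd-sysext merging the containerd-flatcar, docker-flatcar and kubernetes extension images into /usr, which is what triggers the service-manager reload that follows. As a minimal sketch only, the helper below lists the *.raw images such a merge would consider; the search directories are assumed from systemd-sysext defaults and are not taken from this log.

from pathlib import Path

# Assumed systemd-sysext search locations (defaults, not shown in this log).
SYSEXT_DIRS = ("etc/extensions", "run/extensions", "var/lib/extensions")

def sysext_images(root: Path = Path("/")) -> list[Path]:
    """List *.raw extension images a merge like the one logged above would consider."""
    images: list[Path] = []
    for d in SYSEXT_DIRS:
        p = root / d
        if p.is_dir():
            images.extend(sorted(p.glob("*.raw")))
    return images

if __name__ == "__main__":
    for img in sysext_images():
        print(img)  # e.g. /etc/extensions/kubernetes.raw, the symlink Ignition created earlier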
Jan 13 20:22:10.987030 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:22:11.021860 systemd[1]: Reloading finished in 159 ms. Jan 13 20:22:11.036949 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:22:11.047446 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:22:11.053155 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:22:11.055351 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:22:11.057284 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:22:11.061421 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:22:11.064326 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:22:11.069974 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:22:11.080351 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:22:11.081873 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:22:11.087829 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:22:11.092492 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:22:11.094781 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:22:11.100387 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:22:11.100920 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Jan 13 20:22:11.101484 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:22:11.104701 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:22:11.107897 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:22:11.108275 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:22:11.112444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:22:11.112566 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:22:11.114615 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:22:11.116128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:22:11.119850 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:22:11.122304 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:22:11.129590 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:22:11.144432 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:22:11.147526 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:22:11.150775 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:22:11.152332 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 13 20:22:11.153017 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:22:11.155402 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:22:11.160116 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:22:11.163604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:22:11.163730 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:22:11.167229 augenrules[1372]: No rules Jan 13 20:22:11.165692 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:22:11.166132 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:22:11.167958 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:22:11.168212 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:22:11.170309 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:22:11.170426 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:22:11.182269 systemd[1]: Finished ensure-sysext.service. Jan 13 20:22:11.193704 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:22:11.194734 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:22:11.203526 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:22:11.206920 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:22:11.210679 systemd-resolved[1313]: Positive Trust Anchors: Jan 13 20:22:11.210752 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:22:11.210787 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:22:11.212230 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:22:11.218004 systemd-resolved[1313]: Defaulting to hostname 'linux'. Jan 13 20:22:11.222261 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:22:11.223724 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:22:11.225462 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:22:11.227523 augenrules[1384]: /sbin/augenrules: No change Jan 13 20:22:11.232302 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:22:11.233983 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:22:11.234344 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:22:11.236306 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 13 20:22:11.236438 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:22:11.238913 augenrules[1412]: No rules Jan 13 20:22:11.237716 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:22:11.237835 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:22:11.239277 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:22:11.239448 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:22:11.241078 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1368) Jan 13 20:22:11.243732 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:22:11.243888 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:22:11.245181 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:22:11.245305 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:22:11.247404 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 13 20:22:11.262782 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:22:11.264074 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:22:11.264133 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:22:11.270394 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:22:11.284334 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:22:11.310744 systemd-networkd[1403]: lo: Link UP Jan 13 20:22:11.310754 systemd-networkd[1403]: lo: Gained carrier Jan 13 20:22:11.311526 systemd-networkd[1403]: Enumeration completed Jan 13 20:22:11.312914 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:22:11.314446 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:22:11.315664 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:11.315672 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:22:11.316253 systemd-networkd[1403]: eth0: Link UP Jan 13 20:22:11.316260 systemd-networkd[1403]: eth0: Gained carrier Jan 13 20:22:11.316272 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:11.316911 systemd[1]: Reached target network.target - Network. Jan 13 20:22:11.330314 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:22:11.331642 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:22:11.333510 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:22:11.334134 systemd-networkd[1403]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:22:11.335644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:22:11.336456 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection. 
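The DHCPv4 lease logged above is 10.0.0.109/16 with gateway 10.0.0.1 acquired from 10.0.0.1. A quick standard-library check of what that prefix implies (a sketch using only the values shown in the log):

import ipaddress

# Lease parameters copied from the systemd-networkd entry above.
iface = ipaddress.ip_interface("10.0.0.109/16")
print(iface.network)                    # 10.0.0.0/16
print(iface.network.netmask)            # 255.255.0.0
print(iface.network.broadcast_address)  # 10.0.255.255
print(ipaddress.ip_address("10.0.0.1") in iface.network)  # gateway is on-link: True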
Jan 13 20:22:11.337153 systemd-timesyncd[1409]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 20:22:11.337215 systemd-timesyncd[1409]: Initial clock synchronization to Mon 2025-01-13 20:22:11.374384 UTC. Jan 13 20:22:11.343122 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:22:11.345794 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:22:11.365617 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:22:11.378955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:11.405558 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:22:11.407003 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:22:11.408196 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:22:11.409252 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:22:11.410449 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:22:11.411917 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:22:11.413105 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:22:11.414155 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:22:11.415337 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:22:11.415371 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:22:11.416116 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:22:11.417866 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:22:11.420385 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:22:11.434932 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:22:11.437155 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:22:11.438710 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:22:11.439849 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:22:11.440858 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:22:11.441878 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:22:11.441913 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:22:11.442798 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:22:11.444725 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:22:11.447272 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:22:11.447639 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:22:11.450308 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:22:11.451577 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:22:11.455266 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jan 13 20:22:11.457115 jq[1445]: false Jan 13 20:22:11.459218 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:22:11.461320 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:22:11.465172 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:22:11.469705 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:22:11.477424 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:22:11.478653 extend-filesystems[1446]: Found loop3 Jan 13 20:22:11.478653 extend-filesystems[1446]: Found loop4 Jan 13 20:22:11.478653 extend-filesystems[1446]: Found loop5 Jan 13 20:22:11.478653 extend-filesystems[1446]: Found vda Jan 13 20:22:11.478653 extend-filesystems[1446]: Found vda1 Jan 13 20:22:11.478653 extend-filesystems[1446]: Found vda2 Jan 13 20:22:11.478653 extend-filesystems[1446]: Found vda3 Jan 13 20:22:11.478653 extend-filesystems[1446]: Found usr Jan 13 20:22:11.478653 extend-filesystems[1446]: Found vda4 Jan 13 20:22:11.478653 extend-filesystems[1446]: Found vda6 Jan 13 20:22:11.478653 extend-filesystems[1446]: Found vda7 Jan 13 20:22:11.478653 extend-filesystems[1446]: Found vda9 Jan 13 20:22:11.478653 extend-filesystems[1446]: Checking size of /dev/vda9 Jan 13 20:22:11.477900 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:22:11.479122 dbus-daemon[1444]: [system] SELinux support is enabled Jan 13 20:22:11.511541 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1362) Jan 13 20:22:11.480252 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:22:11.511710 extend-filesystems[1446]: Resized partition /dev/vda9 Jan 13 20:22:11.485786 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:22:11.519348 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:22:11.524177 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 20:22:11.524217 jq[1463]: true Jan 13 20:22:11.487499 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:22:11.494879 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:22:11.502846 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:22:11.502999 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:22:11.503417 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:22:11.503556 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:22:11.509550 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:22:11.509689 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:22:11.545153 tar[1469]: linux-arm64/helm Jan 13 20:22:11.537031 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:22:11.546737 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
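The resize2fs run that starts here grows the ext4 filesystem on /dev/vda9 from 553472 to 1864699 blocks, with completion logged just below ("(4k) blocks long"). In 4 KiB blocks those counts work out as follows:

# Size check for the online resize of /dev/vda9 logged around here
# (block counts from the log, 4 KiB ext4 blocks).
BLOCK = 4096
old_blocks, new_blocks = 553472, 1864699

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before: {gib(old_blocks):.2f} GiB")  # ~2.11 GiB
print(f"after:  {gib(new_blocks):.2f} GiB")  # ~7.11 GiB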
Jan 13 20:22:11.546776 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:22:11.548866 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:22:11.550183 jq[1470]: true Jan 13 20:22:11.548897 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:22:11.560483 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 20:22:11.570178 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:22:11.570652 systemd-logind[1453]: New seat seat0. Jan 13 20:22:11.572273 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:22:11.574494 extend-filesystems[1468]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 20:22:11.574494 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:22:11.574494 extend-filesystems[1468]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 20:22:11.589021 extend-filesystems[1446]: Resized filesystem in /dev/vda9 Jan 13 20:22:11.577495 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:22:11.590092 update_engine[1461]: I20250113 20:22:11.575827 1461 main.cc:92] Flatcar Update Engine starting Jan 13 20:22:11.590092 update_engine[1461]: I20250113 20:22:11.581424 1461 update_check_scheduler.cc:74] Next update check in 4m21s Jan 13 20:22:11.578685 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:22:11.583752 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:22:11.593320 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:22:11.619442 bash[1500]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:22:11.617222 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:22:11.619986 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 20:22:11.651668 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:22:11.743410 containerd[1472]: time="2025-01-13T20:22:11.743325800Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:22:11.774246 containerd[1472]: time="2025-01-13T20:22:11.774137440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:11.778233 containerd[1472]: time="2025-01-13T20:22:11.778191080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:22:11.778233 containerd[1472]: time="2025-01-13T20:22:11.778230680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:22:11.778325 containerd[1472]: time="2025-01-13T20:22:11.778247240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:22:11.778420 containerd[1472]: time="2025-01-13T20:22:11.778401440Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 13 20:22:11.778446 containerd[1472]: time="2025-01-13T20:22:11.778424000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:11.778492 containerd[1472]: time="2025-01-13T20:22:11.778477880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:22:11.778514 containerd[1472]: time="2025-01-13T20:22:11.778492400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:11.778663 containerd[1472]: time="2025-01-13T20:22:11.778647000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:22:11.778682 containerd[1472]: time="2025-01-13T20:22:11.778664160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:11.778682 containerd[1472]: time="2025-01-13T20:22:11.778678480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:22:11.778718 containerd[1472]: time="2025-01-13T20:22:11.778687280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:11.778899 containerd[1472]: time="2025-01-13T20:22:11.778882640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:11.779119 containerd[1472]: time="2025-01-13T20:22:11.779101120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:11.779240 containerd[1472]: time="2025-01-13T20:22:11.779214240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:22:11.779264 containerd[1472]: time="2025-01-13T20:22:11.779240320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:22:11.779329 containerd[1472]: time="2025-01-13T20:22:11.779316200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:22:11.779369 containerd[1472]: time="2025-01-13T20:22:11.779358000Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:22:11.787246 containerd[1472]: time="2025-01-13T20:22:11.787214640Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:22:11.787334 containerd[1472]: time="2025-01-13T20:22:11.787270400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:22:11.787334 containerd[1472]: time="2025-01-13T20:22:11.787286760Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:22:11.787334 containerd[1472]: time="2025-01-13T20:22:11.787302080Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 13 20:22:11.787334 containerd[1472]: time="2025-01-13T20:22:11.787319160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:22:11.787504 containerd[1472]: time="2025-01-13T20:22:11.787475000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:22:11.788636 containerd[1472]: time="2025-01-13T20:22:11.787749800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:22:11.788636 containerd[1472]: time="2025-01-13T20:22:11.787902920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:22:11.788636 containerd[1472]: time="2025-01-13T20:22:11.787920080Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:22:11.788636 containerd[1472]: time="2025-01-13T20:22:11.787936000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:22:11.788636 containerd[1472]: time="2025-01-13T20:22:11.787950280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:22:11.788636 containerd[1472]: time="2025-01-13T20:22:11.787963200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:22:11.788636 containerd[1472]: time="2025-01-13T20:22:11.787976760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:22:11.788636 containerd[1472]: time="2025-01-13T20:22:11.787990160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:22:11.788636 containerd[1472]: time="2025-01-13T20:22:11.788003360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:22:11.788636 containerd[1472]: time="2025-01-13T20:22:11.788016280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:22:11.788636 containerd[1472]: time="2025-01-13T20:22:11.788036360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:22:11.788636 containerd[1472]: time="2025-01-13T20:22:11.788048880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:22:11.788636 containerd[1472]: time="2025-01-13T20:22:11.788117440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.788636 containerd[1472]: time="2025-01-13T20:22:11.788142880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.788904 containerd[1472]: time="2025-01-13T20:22:11.788156080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.788904 containerd[1472]: time="2025-01-13T20:22:11.788179880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.788904 containerd[1472]: time="2025-01-13T20:22:11.788192520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 13 20:22:11.788904 containerd[1472]: time="2025-01-13T20:22:11.788205560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.788904 containerd[1472]: time="2025-01-13T20:22:11.788218200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.788904 containerd[1472]: time="2025-01-13T20:22:11.788230080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.788904 containerd[1472]: time="2025-01-13T20:22:11.788244240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.788904 containerd[1472]: time="2025-01-13T20:22:11.788260160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.788904 containerd[1472]: time="2025-01-13T20:22:11.788274360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.788904 containerd[1472]: time="2025-01-13T20:22:11.788291800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.788904 containerd[1472]: time="2025-01-13T20:22:11.788303840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.788904 containerd[1472]: time="2025-01-13T20:22:11.788320120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:22:11.788904 containerd[1472]: time="2025-01-13T20:22:11.788341160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.788904 containerd[1472]: time="2025-01-13T20:22:11.788355200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.788904 containerd[1472]: time="2025-01-13T20:22:11.788365680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:22:11.790052 containerd[1472]: time="2025-01-13T20:22:11.789956920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:22:11.790173 containerd[1472]: time="2025-01-13T20:22:11.790143760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:22:11.790304 containerd[1472]: time="2025-01-13T20:22:11.790234000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:22:11.790369 containerd[1472]: time="2025-01-13T20:22:11.790354720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:22:11.790414 containerd[1472]: time="2025-01-13T20:22:11.790403920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.790531 containerd[1472]: time="2025-01-13T20:22:11.790515480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:22:11.790583 containerd[1472]: time="2025-01-13T20:22:11.790571320Z" level=info msg="NRI interface is disabled by configuration." 
Jan 13 20:22:11.790681 containerd[1472]: time="2025-01-13T20:22:11.790620080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.791286 containerd[1472]: time="2025-01-13T20:22:11.791180640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:22:11.791519 containerd[1472]: time="2025-01-13T20:22:11.791498320Z" level=info msg="Connect containerd service" Jan 13 20:22:11.791658 containerd[1472]: time="2025-01-13T20:22:11.791596320Z" level=info msg="using legacy CRI server" Jan 13 20:22:11.791717 containerd[1472]: time="2025-01-13T20:22:11.791703560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:22:11.792252 containerd[1472]: time="2025-01-13T20:22:11.792107840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:22:11.793220 containerd[1472]: time="2025-01-13T20:22:11.793188800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:22:11.793600 containerd[1472]: time="2025-01-13T20:22:11.793529840Z" level=info msg="Start subscribing containerd event" Jan 13 20:22:11.793600 containerd[1472]: time="2025-01-13T20:22:11.793583160Z" level=info msg="Start recovering state" Jan 13 20:22:11.793658 containerd[1472]: time="2025-01-13T20:22:11.793645520Z" level=info msg="Start event monitor" Jan 13 20:22:11.793677 containerd[1472]: time="2025-01-13T20:22:11.793656320Z" level=info msg="Start snapshots syncer" Jan 13 20:22:11.793677 containerd[1472]: time="2025-01-13T20:22:11.793666520Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:22:11.793677 containerd[1472]: time="2025-01-13T20:22:11.793673200Z" level=info msg="Start streaming server" Jan 13 20:22:11.795785 containerd[1472]: time="2025-01-13T20:22:11.794513840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:22:11.795785 containerd[1472]: time="2025-01-13T20:22:11.794589200Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:22:11.795785 containerd[1472]: time="2025-01-13T20:22:11.794644640Z" level=info msg="containerd successfully booted in 0.054777s" Jan 13 20:22:11.794735 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:22:11.897659 tar[1469]: linux-arm64/LICENSE Jan 13 20:22:11.897659 tar[1469]: linux-arm64/README.md Jan 13 20:22:11.909430 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:22:12.078921 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:22:12.098119 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:22:12.114656 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:22:12.119626 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:22:12.119851 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:22:12.123127 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:22:12.135165 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:22:12.138120 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:22:12.140093 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 20:22:12.141288 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:22:12.561500 systemd-networkd[1403]: eth0: Gained IPv6LL Jan 13 20:22:12.563836 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:22:12.565625 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:22:12.577287 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:22:12.579611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:22:12.581568 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:22:12.596861 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:22:12.597053 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:22:12.598651 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:22:12.600922 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 13 20:22:13.059775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:22:13.061414 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:22:13.063773 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:22:13.065140 systemd[1]: Startup finished in 589ms (kernel) + 5.093s (initrd) + 3.300s (userspace) = 8.983s. Jan 13 20:22:13.485360 kubelet[1556]: E0113 20:22:13.485249 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:22:13.487126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:22:13.487256 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:22:17.271696 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:22:17.272829 systemd[1]: Started sshd@0-10.0.0.109:22-10.0.0.1:36428.service - OpenSSH per-connection server daemon (10.0.0.1:36428). Jan 13 20:22:17.340137 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 36428 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:22:17.343370 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:22:17.357011 systemd-logind[1453]: New session 1 of user core. Jan 13 20:22:17.357995 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:22:17.371304 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:22:17.380248 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:22:17.382350 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:22:17.388757 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:22:17.460480 systemd[1573]: Queued start job for default target default.target. Jan 13 20:22:17.475147 systemd[1573]: Created slice app.slice - User Application Slice. Jan 13 20:22:17.475188 systemd[1573]: Reached target paths.target - Paths. Jan 13 20:22:17.475199 systemd[1573]: Reached target timers.target - Timers. Jan 13 20:22:17.476460 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:22:17.485913 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:22:17.485974 systemd[1573]: Reached target sockets.target - Sockets. Jan 13 20:22:17.485986 systemd[1573]: Reached target basic.target - Basic System. Jan 13 20:22:17.486019 systemd[1573]: Reached target default.target - Main User Target. Jan 13 20:22:17.486045 systemd[1573]: Startup finished in 92ms. Jan 13 20:22:17.486314 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:22:17.487620 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:22:17.551017 systemd[1]: Started sshd@1-10.0.0.109:22-10.0.0.1:36444.service - OpenSSH per-connection server daemon (10.0.0.1:36444). 
Jan 13 20:22:17.596140 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 36444 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:22:17.597528 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:22:17.601780 systemd-logind[1453]: New session 2 of user core. Jan 13 20:22:17.617256 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:22:17.669221 sshd[1586]: Connection closed by 10.0.0.1 port 36444 Jan 13 20:22:17.669567 sshd-session[1584]: pam_unix(sshd:session): session closed for user core Jan 13 20:22:17.685694 systemd[1]: sshd@1-10.0.0.109:22-10.0.0.1:36444.service: Deactivated successfully. Jan 13 20:22:17.688703 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:22:17.689371 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:22:17.691281 systemd[1]: Started sshd@2-10.0.0.109:22-10.0.0.1:36458.service - OpenSSH per-connection server daemon (10.0.0.1:36458). Jan 13 20:22:17.692155 systemd-logind[1453]: Removed session 2. Jan 13 20:22:17.735800 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 36458 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:22:17.737095 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:22:17.741237 systemd-logind[1453]: New session 3 of user core. Jan 13 20:22:17.748240 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:22:17.796290 sshd[1593]: Connection closed by 10.0.0.1 port 36458 Jan 13 20:22:17.796675 sshd-session[1591]: pam_unix(sshd:session): session closed for user core Jan 13 20:22:17.809554 systemd[1]: sshd@2-10.0.0.109:22-10.0.0.1:36458.service: Deactivated successfully. Jan 13 20:22:17.811144 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:22:17.813182 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:22:17.814509 systemd[1]: Started sshd@3-10.0.0.109:22-10.0.0.1:36472.service - OpenSSH per-connection server daemon (10.0.0.1:36472). Jan 13 20:22:17.815409 systemd-logind[1453]: Removed session 3. Jan 13 20:22:17.858341 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 36472 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:22:17.859642 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:22:17.863918 systemd-logind[1453]: New session 4 of user core. Jan 13 20:22:17.871195 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:22:17.922130 sshd[1600]: Connection closed by 10.0.0.1 port 36472 Jan 13 20:22:17.922440 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Jan 13 20:22:17.935515 systemd[1]: sshd@3-10.0.0.109:22-10.0.0.1:36472.service: Deactivated successfully. Jan 13 20:22:17.937089 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:22:17.938279 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:22:17.939379 systemd[1]: Started sshd@4-10.0.0.109:22-10.0.0.1:36482.service - OpenSSH per-connection server daemon (10.0.0.1:36482). Jan 13 20:22:17.940248 systemd-logind[1453]: Removed session 4. 
Jan 13 20:22:17.984509 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 36482 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:22:17.985887 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:22:17.990184 systemd-logind[1453]: New session 5 of user core. Jan 13 20:22:17.997282 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:22:18.058873 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:22:18.059578 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:22:18.079036 sudo[1608]: pam_unix(sudo:session): session closed for user root Jan 13 20:22:18.080684 sshd[1607]: Connection closed by 10.0.0.1 port 36482 Jan 13 20:22:18.081331 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Jan 13 20:22:18.088589 systemd[1]: sshd@4-10.0.0.109:22-10.0.0.1:36482.service: Deactivated successfully. Jan 13 20:22:18.091473 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:22:18.095994 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:22:18.097933 systemd[1]: Started sshd@5-10.0.0.109:22-10.0.0.1:36498.service - OpenSSH per-connection server daemon (10.0.0.1:36498). Jan 13 20:22:18.099889 systemd-logind[1453]: Removed session 5. Jan 13 20:22:18.143766 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 36498 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:22:18.145209 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:22:18.150728 systemd-logind[1453]: New session 6 of user core. Jan 13 20:22:18.156255 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:22:18.209216 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:22:18.209497 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:22:18.212488 sudo[1617]: pam_unix(sudo:session): session closed for user root Jan 13 20:22:18.217592 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:22:18.220268 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:22:18.241657 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:22:18.267413 augenrules[1639]: No rules Jan 13 20:22:18.268700 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:22:18.268967 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:22:18.270835 sudo[1616]: pam_unix(sudo:session): session closed for user root Jan 13 20:22:18.272203 sshd[1615]: Connection closed by 10.0.0.1 port 36498 Jan 13 20:22:18.273799 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Jan 13 20:22:18.287578 systemd[1]: sshd@5-10.0.0.109:22-10.0.0.1:36498.service: Deactivated successfully. Jan 13 20:22:18.290262 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:22:18.291602 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:22:18.302770 systemd[1]: Started sshd@6-10.0.0.109:22-10.0.0.1:36514.service - OpenSSH per-connection server daemon (10.0.0.1:36514). Jan 13 20:22:18.307497 systemd-logind[1453]: Removed session 6. 
Jan 13 20:22:18.366932 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 36514 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:22:18.368507 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:22:18.372857 systemd-logind[1453]: New session 7 of user core. Jan 13 20:22:18.383243 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:22:18.435262 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:22:18.435557 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:22:18.757392 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:22:18.757416 (dockerd)[1672]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:22:19.000318 dockerd[1672]: time="2025-01-13T20:22:19.000252754Z" level=info msg="Starting up" Jan 13 20:22:19.145820 dockerd[1672]: time="2025-01-13T20:22:19.145716385Z" level=info msg="Loading containers: start." Jan 13 20:22:19.287096 kernel: Initializing XFRM netlink socket Jan 13 20:22:19.354969 systemd-networkd[1403]: docker0: Link UP Jan 13 20:22:19.386598 dockerd[1672]: time="2025-01-13T20:22:19.386489033Z" level=info msg="Loading containers: done." Jan 13 20:22:19.400115 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3518409582-merged.mount: Deactivated successfully. Jan 13 20:22:19.402308 dockerd[1672]: time="2025-01-13T20:22:19.401872726Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:22:19.402308 dockerd[1672]: time="2025-01-13T20:22:19.401978723Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:22:19.402308 dockerd[1672]: time="2025-01-13T20:22:19.402114524Z" level=info msg="Daemon has completed initialization" Jan 13 20:22:19.432601 dockerd[1672]: time="2025-01-13T20:22:19.432537439Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:22:19.432763 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:22:19.962223 containerd[1472]: time="2025-01-13T20:22:19.962172408Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 13 20:22:20.724969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3933586530.mount: Deactivated successfully. 
Jan 13 20:22:22.366574 containerd[1472]: time="2025-01-13T20:22:22.366525011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:22.368189 containerd[1472]: time="2025-01-13T20:22:22.368148115Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615587" Jan 13 20:22:22.369192 containerd[1472]: time="2025-01-13T20:22:22.369154665Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:22.371777 containerd[1472]: time="2025-01-13T20:22:22.371746513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:22.373110 containerd[1472]: time="2025-01-13T20:22:22.372936367Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 2.410715609s" Jan 13 20:22:22.373110 containerd[1472]: time="2025-01-13T20:22:22.372970168Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\"" Jan 13 20:22:22.373830 containerd[1472]: time="2025-01-13T20:22:22.373665739Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 13 20:22:23.737532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:22:23.750258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:22:23.844370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:22:23.848202 (kubelet)[1932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:22:23.880876 kubelet[1932]: E0113 20:22:23.880785 1932 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:22:23.883336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:22:23.883459 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 20:22:24.585464 containerd[1472]: time="2025-01-13T20:22:24.585414528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:24.586437 containerd[1472]: time="2025-01-13T20:22:24.586148116Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470098" Jan 13 20:22:24.587141 containerd[1472]: time="2025-01-13T20:22:24.587106345Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:24.590117 containerd[1472]: time="2025-01-13T20:22:24.590086147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:24.591273 containerd[1472]: time="2025-01-13T20:22:24.591239226Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 2.217543813s" Jan 13 20:22:24.591322 containerd[1472]: time="2025-01-13T20:22:24.591270019Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\"" Jan 13 20:22:24.591998 containerd[1472]: time="2025-01-13T20:22:24.591751496Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 13 20:22:26.277720 containerd[1472]: time="2025-01-13T20:22:26.277665669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:26.278571 containerd[1472]: time="2025-01-13T20:22:26.278521557Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024204" Jan 13 20:22:26.279655 containerd[1472]: time="2025-01-13T20:22:26.279589326Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:26.283103 containerd[1472]: time="2025-01-13T20:22:26.282877871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:26.284193 containerd[1472]: time="2025-01-13T20:22:26.284146509Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.692361258s" Jan 13 20:22:26.284193 containerd[1472]: time="2025-01-13T20:22:26.284191512Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\"" Jan 13 20:22:26.284666 
containerd[1472]: time="2025-01-13T20:22:26.284640135Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 20:22:27.359675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177221920.mount: Deactivated successfully. Jan 13 20:22:28.599008 containerd[1472]: time="2025-01-13T20:22:28.598941753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:28.599496 containerd[1472]: time="2025-01-13T20:22:28.599462546Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771428" Jan 13 20:22:28.600245 containerd[1472]: time="2025-01-13T20:22:28.600216412Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:28.602689 containerd[1472]: time="2025-01-13T20:22:28.602643586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:28.603598 containerd[1472]: time="2025-01-13T20:22:28.603390166Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 2.318715199s" Jan 13 20:22:28.603598 containerd[1472]: time="2025-01-13T20:22:28.603420912Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Jan 13 20:22:28.604006 containerd[1472]: time="2025-01-13T20:22:28.603882095Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:22:29.328329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1687100239.mount: Deactivated successfully. 
Jan 13 20:22:30.128213 containerd[1472]: time="2025-01-13T20:22:30.128148661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:30.128686 containerd[1472]: time="2025-01-13T20:22:30.128628931Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 13 20:22:30.129610 containerd[1472]: time="2025-01-13T20:22:30.129573821Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:30.133287 containerd[1472]: time="2025-01-13T20:22:30.133237414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:30.134234 containerd[1472]: time="2025-01-13T20:22:30.134197595Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.530276549s" Jan 13 20:22:30.134272 containerd[1472]: time="2025-01-13T20:22:30.134237984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 20:22:30.134809 containerd[1472]: time="2025-01-13T20:22:30.134775096Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 20:22:30.646755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3663581055.mount: Deactivated successfully. 
Jan 13 20:22:30.651954 containerd[1472]: time="2025-01-13T20:22:30.651902992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:30.653045 containerd[1472]: time="2025-01-13T20:22:30.652827787Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jan 13 20:22:30.653825 containerd[1472]: time="2025-01-13T20:22:30.653786727Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:30.656285 containerd[1472]: time="2025-01-13T20:22:30.656250164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:30.657206 containerd[1472]: time="2025-01-13T20:22:30.657162710Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 522.352469ms" Jan 13 20:22:30.657206 containerd[1472]: time="2025-01-13T20:22:30.657201739Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 13 20:22:30.657933 containerd[1472]: time="2025-01-13T20:22:30.657719437Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 13 20:22:31.242165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2736483006.mount: Deactivated successfully. Jan 13 20:22:34.133945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:22:34.143251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:22:34.237684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:22:34.243451 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:22:34.281807 kubelet[2064]: E0113 20:22:34.281692 2064 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:22:34.284264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:22:34.284416 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 20:22:34.679025 containerd[1472]: time="2025-01-13T20:22:34.678977043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:34.680053 containerd[1472]: time="2025-01-13T20:22:34.679711778Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Jan 13 20:22:34.680764 containerd[1472]: time="2025-01-13T20:22:34.680720586Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:34.684246 containerd[1472]: time="2025-01-13T20:22:34.684206512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:22:34.686153 containerd[1472]: time="2025-01-13T20:22:34.686105222Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.028355364s" Jan 13 20:22:34.686153 containerd[1472]: time="2025-01-13T20:22:34.686148927Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 13 20:22:38.428521 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:22:38.439380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:22:38.460876 systemd[1]: Reloading requested from client PID 2105 ('systemctl') (unit session-7.scope)... Jan 13 20:22:38.460893 systemd[1]: Reloading... Jan 13 20:22:38.525185 zram_generator::config[2147]: No configuration found. Jan 13 20:22:38.739602 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:22:38.791188 systemd[1]: Reloading finished in 329 ms. Jan 13 20:22:38.834219 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:22:38.834290 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:22:38.834507 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:22:38.836780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:22:38.940784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:22:38.946113 (kubelet)[2190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:22:38.981070 kubelet[2190]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:22:38.981070 kubelet[2190]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 13 20:22:38.981070 kubelet[2190]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:22:38.981434 kubelet[2190]: I0113 20:22:38.981247 2190 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:22:39.874237 kubelet[2190]: I0113 20:22:39.874186 2190 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:22:39.874237 kubelet[2190]: I0113 20:22:39.874220 2190 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:22:39.874508 kubelet[2190]: I0113 20:22:39.874479 2190 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:22:39.992894 kubelet[2190]: E0113 20:22:39.992849 2190 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:22:39.995838 kubelet[2190]: I0113 20:22:39.995553 2190 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:22:40.008314 kubelet[2190]: E0113 20:22:40.008281 2190 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:22:40.008404 kubelet[2190]: I0113 20:22:40.008392 2190 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:22:40.011990 kubelet[2190]: I0113 20:22:40.011968 2190 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:22:40.013185 kubelet[2190]: I0113 20:22:40.013163 2190 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:22:40.013436 kubelet[2190]: I0113 20:22:40.013403 2190 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:22:40.013671 kubelet[2190]: I0113 20:22:40.013501 2190 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:22:40.014048 kubelet[2190]: I0113 20:22:40.014033 2190 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:22:40.014133 kubelet[2190]: I0113 20:22:40.014123 2190 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:22:40.014766 kubelet[2190]: I0113 20:22:40.014404 2190 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:22:40.016279 kubelet[2190]: I0113 20:22:40.016257 2190 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:22:40.016457 kubelet[2190]: I0113 20:22:40.016445 2190 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:22:40.016603 kubelet[2190]: I0113 20:22:40.016592 2190 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:22:40.016739 kubelet[2190]: I0113 20:22:40.016726 2190 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:22:40.019680 kubelet[2190]: W0113 20:22:40.019630 2190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused Jan 13 20:22:40.020444 kubelet[2190]: W0113 20:22:40.020402 2190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: 
connection refused Jan 13 20:22:40.020487 kubelet[2190]: E0113 20:22:40.020464 2190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:22:40.020520 kubelet[2190]: E0113 20:22:40.020500 2190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:22:40.023627 kubelet[2190]: I0113 20:22:40.023609 2190 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:22:40.025395 kubelet[2190]: I0113 20:22:40.025372 2190 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:22:40.026095 kubelet[2190]: W0113 20:22:40.026078 2190 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:22:40.032713 kubelet[2190]: I0113 20:22:40.030020 2190 server.go:1269] "Started kubelet" Jan 13 20:22:40.032713 kubelet[2190]: I0113 20:22:40.030942 2190 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:22:40.032713 kubelet[2190]: I0113 20:22:40.032436 2190 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:22:40.036896 kubelet[2190]: I0113 20:22:40.036581 2190 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:22:40.036896 kubelet[2190]: I0113 20:22:40.036848 2190 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:22:40.037002 kubelet[2190]: E0113 20:22:40.034102 2190 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.109:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5a2e06923a07 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:22:40.029989383 +0000 UTC m=+1.080518876,LastTimestamp:2025-01-13 20:22:40.029989383 +0000 UTC m=+1.080518876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:22:40.038713 kubelet[2190]: I0113 20:22:40.038680 2190 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:22:40.039455 kubelet[2190]: I0113 20:22:40.039192 2190 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:22:40.039683 kubelet[2190]: I0113 20:22:40.039481 2190 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:22:40.039683 kubelet[2190]: E0113 20:22:40.039553 2190 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"localhost\" not found" Jan 13 20:22:40.040640 kubelet[2190]: I0113 20:22:40.040405 2190 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:22:40.040640 kubelet[2190]: E0113 20:22:40.040549 2190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="200ms" Jan 13 20:22:40.040818 kubelet[2190]: I0113 20:22:40.040801 2190 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:22:40.041251 kubelet[2190]: E0113 20:22:40.041233 2190 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:22:40.041332 kubelet[2190]: W0113 20:22:40.041291 2190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused Jan 13 20:22:40.041368 kubelet[2190]: E0113 20:22:40.041344 2190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:22:40.041409 kubelet[2190]: I0113 20:22:40.041387 2190 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:22:40.041494 kubelet[2190]: I0113 20:22:40.041473 2190 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:22:40.042963 kubelet[2190]: I0113 20:22:40.042931 2190 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:22:40.053252 kubelet[2190]: I0113 20:22:40.053231 2190 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:22:40.053252 kubelet[2190]: I0113 20:22:40.053248 2190 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:22:40.053385 kubelet[2190]: I0113 20:22:40.053267 2190 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:22:40.054969 kubelet[2190]: I0113 20:22:40.054922 2190 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:22:40.055923 kubelet[2190]: I0113 20:22:40.055900 2190 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:22:40.055923 kubelet[2190]: I0113 20:22:40.055924 2190 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:22:40.055997 kubelet[2190]: I0113 20:22:40.055942 2190 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:22:40.056239 kubelet[2190]: E0113 20:22:40.055983 2190 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:22:40.056710 kubelet[2190]: W0113 20:22:40.056670 2190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused Jan 13 20:22:40.056810 kubelet[2190]: E0113 20:22:40.056783 2190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:22:40.057582 kubelet[2190]: I0113 20:22:40.057559 2190 policy_none.go:49] "None policy: Start" Jan 13 20:22:40.058057 kubelet[2190]: I0113 20:22:40.058040 2190 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:22:40.058184 kubelet[2190]: I0113 20:22:40.058145 2190 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:22:40.064052 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:22:40.074491 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:22:40.077028 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:22:40.087030 kubelet[2190]: I0113 20:22:40.086944 2190 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:22:40.087536 kubelet[2190]: I0113 20:22:40.087171 2190 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:22:40.087536 kubelet[2190]: I0113 20:22:40.087183 2190 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:22:40.087536 kubelet[2190]: I0113 20:22:40.087491 2190 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:22:40.089296 kubelet[2190]: E0113 20:22:40.089238 2190 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:22:40.177721 systemd[1]: Created slice kubepods-burstable-pod024019a0701947f67cb154d619c4da36.slice - libcontainer container kubepods-burstable-pod024019a0701947f67cb154d619c4da36.slice. Jan 13 20:22:40.190911 kubelet[2190]: I0113 20:22:40.190865 2190 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:22:40.191364 kubelet[2190]: E0113 20:22:40.191321 2190 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" Jan 13 20:22:40.191652 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice. 
Jan 13 20:22:40.203217 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice. Jan 13 20:22:40.241007 kubelet[2190]: E0113 20:22:40.240966 2190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="400ms" Jan 13 20:22:40.242275 kubelet[2190]: I0113 20:22:40.242019 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:22:40.242275 kubelet[2190]: I0113 20:22:40.242054 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:22:40.242275 kubelet[2190]: I0113 20:22:40.242091 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/024019a0701947f67cb154d619c4da36-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"024019a0701947f67cb154d619c4da36\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:22:40.242275 kubelet[2190]: I0113 20:22:40.242106 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/024019a0701947f67cb154d619c4da36-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"024019a0701947f67cb154d619c4da36\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:22:40.242275 kubelet[2190]: I0113 20:22:40.242121 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/024019a0701947f67cb154d619c4da36-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"024019a0701947f67cb154d619c4da36\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:22:40.242436 kubelet[2190]: I0113 20:22:40.242135 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:22:40.242436 kubelet[2190]: I0113 20:22:40.242148 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:22:40.242436 kubelet[2190]: I0113 20:22:40.242164 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:22:40.242436 kubelet[2190]: I0113 20:22:40.242179 2190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:22:40.392700 kubelet[2190]: I0113 20:22:40.392671 2190 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:22:40.393034 kubelet[2190]: E0113 20:22:40.392997 2190 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" Jan 13 20:22:40.493820 kubelet[2190]: E0113 20:22:40.493719 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:40.494442 containerd[1472]: time="2025-01-13T20:22:40.494403650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:024019a0701947f67cb154d619c4da36,Namespace:kube-system,Attempt:0,}" Jan 13 20:22:40.502401 kubelet[2190]: E0113 20:22:40.502184 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:40.502598 containerd[1472]: time="2025-01-13T20:22:40.502551969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}" Jan 13 20:22:40.505559 kubelet[2190]: E0113 20:22:40.505069 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:40.505613 containerd[1472]: time="2025-01-13T20:22:40.505363926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}" Jan 13 20:22:40.642258 kubelet[2190]: E0113 20:22:40.642208 2190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="800ms" Jan 13 20:22:40.795049 kubelet[2190]: I0113 20:22:40.794958 2190 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:22:40.795767 kubelet[2190]: E0113 20:22:40.795714 2190 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" Jan 13 20:22:40.873533 kubelet[2190]: W0113 20:22:40.873447 2190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused Jan 13 20:22:40.873533 kubelet[2190]: E0113 20:22:40.873509 2190 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:22:40.942229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3000562870.mount: Deactivated successfully. Jan 13 20:22:40.951049 containerd[1472]: time="2025-01-13T20:22:40.950986959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:22:40.952748 containerd[1472]: time="2025-01-13T20:22:40.952646875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 13 20:22:40.953404 containerd[1472]: time="2025-01-13T20:22:40.953311689Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:22:40.956757 containerd[1472]: time="2025-01-13T20:22:40.956712111Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:22:40.960702 containerd[1472]: time="2025-01-13T20:22:40.957806730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:22:40.960702 containerd[1472]: time="2025-01-13T20:22:40.959330153Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 464.848033ms" Jan 13 20:22:40.960702 containerd[1472]: time="2025-01-13T20:22:40.960091725Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:22:40.961349 containerd[1472]: time="2025-01-13T20:22:40.961304309Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:22:40.966368 containerd[1472]: time="2025-01-13T20:22:40.966207866Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:22:40.971571 containerd[1472]: time="2025-01-13T20:22:40.970100436Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 464.687292ms" Jan 13 20:22:40.975385 containerd[1472]: time="2025-01-13T20:22:40.975354928Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 472.736813ms" Jan 13 20:22:41.069681 kubelet[2190]: E0113 20:22:41.069487 2190 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.109:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5a2e06923a07 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:22:40.029989383 +0000 UTC m=+1.080518876,LastTimestamp:2025-01-13 20:22:40.029989383 +0000 UTC m=+1.080518876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:22:41.088790 containerd[1472]: time="2025-01-13T20:22:41.088656647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:22:41.088790 containerd[1472]: time="2025-01-13T20:22:41.088733234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:22:41.088790 containerd[1472]: time="2025-01-13T20:22:41.088754202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:22:41.089440 containerd[1472]: time="2025-01-13T20:22:41.088835991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:22:41.089440 containerd[1472]: time="2025-01-13T20:22:41.089304239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:22:41.089440 containerd[1472]: time="2025-01-13T20:22:41.089352216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:22:41.089440 containerd[1472]: time="2025-01-13T20:22:41.089371503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:22:41.089521 containerd[1472]: time="2025-01-13T20:22:41.089473180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:22:41.091905 containerd[1472]: time="2025-01-13T20:22:41.091384426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:22:41.091905 containerd[1472]: time="2025-01-13T20:22:41.091423120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:22:41.091905 containerd[1472]: time="2025-01-13T20:22:41.091433443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:22:41.091905 containerd[1472]: time="2025-01-13T20:22:41.091524516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:22:41.113738 systemd[1]: Started cri-containerd-d9c4f8d54aab7cfc6880199a5fbc020a00eaf4d3b916f88f6373f2966db192a0.scope - libcontainer container d9c4f8d54aab7cfc6880199a5fbc020a00eaf4d3b916f88f6373f2966db192a0. Jan 13 20:22:41.118323 systemd[1]: Started cri-containerd-3162f692c3eafbc798b6b9e4c043464494cde071fbcbac621f53dadd6dfe72c7.scope - libcontainer container 3162f692c3eafbc798b6b9e4c043464494cde071fbcbac621f53dadd6dfe72c7. Jan 13 20:22:41.120098 systemd[1]: Started cri-containerd-97163e26dfd5b6fda488c050c4cac449a85288f6de8df21326b6c54943f5a64c.scope - libcontainer container 97163e26dfd5b6fda488c050c4cac449a85288f6de8df21326b6c54943f5a64c. Jan 13 20:22:41.156169 containerd[1472]: time="2025-01-13T20:22:41.156050955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:024019a0701947f67cb154d619c4da36,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9c4f8d54aab7cfc6880199a5fbc020a00eaf4d3b916f88f6373f2966db192a0\"" Jan 13 20:22:41.158549 containerd[1472]: time="2025-01-13T20:22:41.158489230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"3162f692c3eafbc798b6b9e4c043464494cde071fbcbac621f53dadd6dfe72c7\"" Jan 13 20:22:41.158990 kubelet[2190]: E0113 20:22:41.158954 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:41.159650 kubelet[2190]: E0113 20:22:41.159599 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:41.163419 containerd[1472]: time="2025-01-13T20:22:41.163337730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"97163e26dfd5b6fda488c050c4cac449a85288f6de8df21326b6c54943f5a64c\"" Jan 13 20:22:41.163836 kubelet[2190]: E0113 20:22:41.163809 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:41.164689 containerd[1472]: time="2025-01-13T20:22:41.164658804Z" level=info msg="CreateContainer within sandbox \"3162f692c3eafbc798b6b9e4c043464494cde071fbcbac621f53dadd6dfe72c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:22:41.165929 containerd[1472]: time="2025-01-13T20:22:41.165823982Z" level=info msg="CreateContainer within sandbox \"97163e26dfd5b6fda488c050c4cac449a85288f6de8df21326b6c54943f5a64c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:22:41.166352 containerd[1472]: time="2025-01-13T20:22:41.166326843Z" level=info msg="CreateContainer within sandbox \"d9c4f8d54aab7cfc6880199a5fbc020a00eaf4d3b916f88f6373f2966db192a0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:22:41.179330 containerd[1472]: time="2025-01-13T20:22:41.179191500Z" level=info msg="CreateContainer within sandbox \"3162f692c3eafbc798b6b9e4c043464494cde071fbcbac621f53dadd6dfe72c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a16df545aeca8f48904c791755498f22c41fd71b43bd814c21f3a6ce7c8039c2\"" Jan 13 
20:22:41.179798 containerd[1472]: time="2025-01-13T20:22:41.179772148Z" level=info msg="StartContainer for \"a16df545aeca8f48904c791755498f22c41fd71b43bd814c21f3a6ce7c8039c2\"" Jan 13 20:22:41.188506 containerd[1472]: time="2025-01-13T20:22:41.188472591Z" level=info msg="CreateContainer within sandbox \"97163e26dfd5b6fda488c050c4cac449a85288f6de8df21326b6c54943f5a64c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d9bce4228afddfd0832bb5755d0c2c9c3e1d2cd4af262dbf136f2762ff15e3db\"" Jan 13 20:22:41.189051 containerd[1472]: time="2025-01-13T20:22:41.189027590Z" level=info msg="StartContainer for \"d9bce4228afddfd0832bb5755d0c2c9c3e1d2cd4af262dbf136f2762ff15e3db\"" Jan 13 20:22:41.190756 containerd[1472]: time="2025-01-13T20:22:41.190659256Z" level=info msg="CreateContainer within sandbox \"d9c4f8d54aab7cfc6880199a5fbc020a00eaf4d3b916f88f6373f2966db192a0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a20aabb7fadaf2e7ea4673f17173ddb1eaddf225c11da0acae7deeb13d9be70a\"" Jan 13 20:22:41.191431 containerd[1472]: time="2025-01-13T20:22:41.191379434Z" level=info msg="StartContainer for \"a20aabb7fadaf2e7ea4673f17173ddb1eaddf225c11da0acae7deeb13d9be70a\"" Jan 13 20:22:41.209216 systemd[1]: Started cri-containerd-a16df545aeca8f48904c791755498f22c41fd71b43bd814c21f3a6ce7c8039c2.scope - libcontainer container a16df545aeca8f48904c791755498f22c41fd71b43bd814c21f3a6ce7c8039c2. Jan 13 20:22:41.212544 systemd[1]: Started cri-containerd-d9bce4228afddfd0832bb5755d0c2c9c3e1d2cd4af262dbf136f2762ff15e3db.scope - libcontainer container d9bce4228afddfd0832bb5755d0c2c9c3e1d2cd4af262dbf136f2762ff15e3db. Jan 13 20:22:41.240384 systemd[1]: Started cri-containerd-a20aabb7fadaf2e7ea4673f17173ddb1eaddf225c11da0acae7deeb13d9be70a.scope - libcontainer container a20aabb7fadaf2e7ea4673f17173ddb1eaddf225c11da0acae7deeb13d9be70a. 
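The recurring `dns.go:153] "Nameserver limits exceeded"` entries above show the kubelet dropping extra resolvers and applying only `1.1.1.1 1.0.0.1 8.8.8.8`: Kubernetes caps a pod's resolv.conf at three nameservers. The sketch below is a minimal, illustrative reimplementation of that trimming (not kubelet's actual code), assuming a plain `/etc/resolv.conf`-style input.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the limit kubelet enforces when building a pod's
// resolv.conf; resolvers past the first three are omitted.
const maxNameservers = 3

// trimNameservers returns the nameservers that would be kept and those omitted.
func trimNameservers(resolvConf string) (kept, omitted []string) {
	var all []string
	scanner := bufio.NewScanner(strings.NewReader(resolvConf))
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			all = append(all, fields[1])
		}
	}
	if len(all) <= maxNameservers {
		return all, nil
	}
	return all[:maxNameservers], all[maxNameservers:]
}

func main() {
	// Hypothetical host resolv.conf with four resolvers, which would trigger
	// the "Nameserver limits exceeded" warning seen in the log.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, omitted := trimNameservers(conf)
	fmt.Println("applied:", strings.Join(kept, " ")) // applied: 1.1.1.1 1.0.0.1 8.8.8.8
	fmt.Println("omitted:", strings.Join(omitted, " "))
}
```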
Jan 13 20:22:41.253749 containerd[1472]: time="2025-01-13T20:22:41.253623574Z" level=info msg="StartContainer for \"a16df545aeca8f48904c791755498f22c41fd71b43bd814c21f3a6ce7c8039c2\" returns successfully" Jan 13 20:22:41.284095 containerd[1472]: time="2025-01-13T20:22:41.280247129Z" level=info msg="StartContainer for \"d9bce4228afddfd0832bb5755d0c2c9c3e1d2cd4af262dbf136f2762ff15e3db\" returns successfully" Jan 13 20:22:41.299232 containerd[1472]: time="2025-01-13T20:22:41.299127745Z" level=info msg="StartContainer for \"a20aabb7fadaf2e7ea4673f17173ddb1eaddf225c11da0acae7deeb13d9be70a\" returns successfully" Jan 13 20:22:41.353757 kubelet[2190]: W0113 20:22:41.353589 2190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused Jan 13 20:22:41.353757 kubelet[2190]: E0113 20:22:41.353660 2190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:22:41.373440 kubelet[2190]: W0113 20:22:41.373341 2190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused Jan 13 20:22:41.373440 kubelet[2190]: E0113 20:22:41.373407 2190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:22:41.443296 kubelet[2190]: E0113 20:22:41.443242 2190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="1.6s" Jan 13 20:22:41.444546 kubelet[2190]: W0113 20:22:41.444497 2190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused Jan 13 20:22:41.444600 kubelet[2190]: E0113 20:22:41.444558 2190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:22:41.601954 kubelet[2190]: I0113 20:22:41.601919 2190 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:22:42.066569 kubelet[2190]: E0113 20:22:42.066343 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:42.070070 kubelet[2190]: E0113 20:22:42.067726 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:42.074365 kubelet[2190]: E0113 20:22:42.074335 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:42.919719 kubelet[2190]: I0113 20:22:42.919674 2190 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 13 20:22:42.919719 kubelet[2190]: E0113 20:22:42.919717 2190 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 13 20:22:42.929947 kubelet[2190]: E0113 20:22:42.929919 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:22:43.030192 kubelet[2190]: E0113 20:22:43.030126 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:22:43.076448 kubelet[2190]: E0113 20:22:43.076419 2190 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:43.130999 kubelet[2190]: E0113 20:22:43.130942 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:22:43.231935 kubelet[2190]: E0113 20:22:43.231811 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:22:43.332458 kubelet[2190]: E0113 20:22:43.332409 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:22:43.432986 kubelet[2190]: E0113 20:22:43.432947 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:22:43.533952 kubelet[2190]: E0113 20:22:43.533837 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:22:43.634427 kubelet[2190]: E0113 20:22:43.634373 2190 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:22:44.019741 kubelet[2190]: I0113 20:22:44.019617 2190 apiserver.go:52] "Watching apiserver" Jan 13 20:22:44.039719 kubelet[2190]: I0113 20:22:44.039602 2190 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:22:44.911783 systemd[1]: Reloading requested from client PID 2467 ('systemctl') (unit session-7.scope)... Jan 13 20:22:44.911798 systemd[1]: Reloading... Jan 13 20:22:44.983101 zram_generator::config[2506]: No configuration found. Jan 13 20:22:45.079969 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:22:45.142677 systemd[1]: Reloading finished in 230 ms. Jan 13 20:22:45.175255 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:22:45.185497 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:22:45.185689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:22:45.185728 systemd[1]: kubelet.service: Consumed 1.473s CPU time, 117.8M memory peak, 0B memory swap peak. 
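The lease controller entries above retry first with interval="800ms" and, after the next failure, interval="1.6s", which is consistent with a doubling backoff. The following sketch only illustrates that doubling pattern under an assumed ceiling; it is not kubelet's actual retry logic.

```go
package main

import (
	"fmt"
	"time"
)

// nextInterval doubles the retry delay up to an assumed ceiling, matching the
// progression visible in the log (interval="800ms" followed by interval="1.6s").
func nextInterval(current, maxDelay time.Duration) time.Duration {
	if next := current * 2; next < maxDelay {
		return next
	}
	return maxDelay
}

func main() {
	interval := 800 * time.Millisecond
	maxDelay := 7 * time.Second // assumed ceiling, for illustration only
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
		interval = nextInterval(interval, maxDelay)
	}
}
```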
Jan 13 20:22:45.196332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:22:45.288998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:22:45.294126 (kubelet)[2548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:22:45.333207 kubelet[2548]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:22:45.333207 kubelet[2548]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:22:45.333207 kubelet[2548]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:22:45.333532 kubelet[2548]: I0113 20:22:45.333256 2548 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:22:45.339901 kubelet[2548]: I0113 20:22:45.338301 2548 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:22:45.339901 kubelet[2548]: I0113 20:22:45.338326 2548 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:22:45.339901 kubelet[2548]: I0113 20:22:45.338516 2548 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:22:45.340167 kubelet[2548]: I0113 20:22:45.340147 2548 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:22:45.342161 kubelet[2548]: I0113 20:22:45.342089 2548 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:22:45.345854 kubelet[2548]: E0113 20:22:45.345823 2548 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:22:45.346038 kubelet[2548]: I0113 20:22:45.346020 2548 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:22:45.348209 kubelet[2548]: I0113 20:22:45.348168 2548 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:22:45.348352 kubelet[2548]: I0113 20:22:45.348295 2548 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:22:45.348409 kubelet[2548]: I0113 20:22:45.348385 2548 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:22:45.348558 kubelet[2548]: I0113 20:22:45.348406 2548 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:22:45.348629 kubelet[2548]: I0113 20:22:45.348562 2548 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:22:45.348629 kubelet[2548]: I0113 20:22:45.348571 2548 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:22:45.348629 kubelet[2548]: I0113 20:22:45.348604 2548 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:22:45.348712 kubelet[2548]: I0113 20:22:45.348698 2548 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:22:45.348712 kubelet[2548]: I0113 20:22:45.348710 2548 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:22:45.348763 kubelet[2548]: I0113 20:22:45.348728 2548 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:22:45.348763 kubelet[2548]: I0113 20:22:45.348738 2548 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:22:45.351764 kubelet[2548]: I0113 20:22:45.349184 2548 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:22:45.351764 kubelet[2548]: I0113 20:22:45.349589 2548 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:22:45.351764 kubelet[2548]: I0113 20:22:45.349912 2548 server.go:1269] "Started kubelet" Jan 13 20:22:45.351764 kubelet[2548]: I0113 20:22:45.350178 2548 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:22:45.351764 kubelet[2548]: I0113 
20:22:45.350332 2548 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:22:45.351764 kubelet[2548]: I0113 20:22:45.350547 2548 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:22:45.351764 kubelet[2548]: I0113 20:22:45.351598 2548 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:22:45.352395 kubelet[2548]: I0113 20:22:45.352367 2548 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:22:45.352554 kubelet[2548]: I0113 20:22:45.352525 2548 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:22:45.353819 kubelet[2548]: I0113 20:22:45.353787 2548 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:22:45.353890 kubelet[2548]: I0113 20:22:45.353877 2548 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:22:45.354011 kubelet[2548]: I0113 20:22:45.353988 2548 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:22:45.355006 kubelet[2548]: E0113 20:22:45.354965 2548 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:22:45.355336 kubelet[2548]: I0113 20:22:45.355300 2548 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:22:45.355796 kubelet[2548]: E0113 20:22:45.355764 2548 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:22:45.361133 kubelet[2548]: I0113 20:22:45.358624 2548 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:22:45.361133 kubelet[2548]: I0113 20:22:45.358645 2548 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:22:45.380302 kubelet[2548]: I0113 20:22:45.379889 2548 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:22:45.381196 kubelet[2548]: I0113 20:22:45.381035 2548 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:22:45.381196 kubelet[2548]: I0113 20:22:45.381105 2548 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:22:45.381196 kubelet[2548]: I0113 20:22:45.381128 2548 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:22:45.381196 kubelet[2548]: E0113 20:22:45.381185 2548 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:22:45.401435 kubelet[2548]: I0113 20:22:45.401398 2548 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:22:45.401435 kubelet[2548]: I0113 20:22:45.401417 2548 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:22:45.401435 kubelet[2548]: I0113 20:22:45.401435 2548 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:22:45.401585 kubelet[2548]: I0113 20:22:45.401569 2548 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:22:45.401607 kubelet[2548]: I0113 20:22:45.401580 2548 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:22:45.401607 kubelet[2548]: I0113 20:22:45.401596 2548 policy_none.go:49] "None policy: Start" Jan 13 20:22:45.402115 kubelet[2548]: I0113 20:22:45.402095 2548 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:22:45.402115 kubelet[2548]: I0113 20:22:45.402118 2548 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:22:45.402245 kubelet[2548]: I0113 20:22:45.402228 2548 state_mem.go:75] "Updated machine memory state" Jan 13 20:22:45.408486 kubelet[2548]: I0113 20:22:45.408461 2548 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:22:45.408749 kubelet[2548]: I0113 20:22:45.408721 2548 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:22:45.408797 kubelet[2548]: I0113 20:22:45.408741 2548 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:22:45.409469 kubelet[2548]: I0113 20:22:45.408994 2548 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:22:45.513497 kubelet[2548]: I0113 20:22:45.513390 2548 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 13 20:22:45.521185 kubelet[2548]: I0113 20:22:45.521147 2548 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 13 20:22:45.521287 kubelet[2548]: I0113 20:22:45.521231 2548 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 13 20:22:45.555140 kubelet[2548]: I0113 20:22:45.555051 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/024019a0701947f67cb154d619c4da36-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"024019a0701947f67cb154d619c4da36\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:22:45.555140 kubelet[2548]: I0113 20:22:45.555107 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/024019a0701947f67cb154d619c4da36-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"024019a0701947f67cb154d619c4da36\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:22:45.555140 kubelet[2548]: I0113 20:22:45.555140 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:22:45.555348 kubelet[2548]: I0113 20:22:45.555161 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:22:45.555348 kubelet[2548]: I0113 20:22:45.555178 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/024019a0701947f67cb154d619c4da36-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"024019a0701947f67cb154d619c4da36\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:22:45.555348 kubelet[2548]: I0113 20:22:45.555230 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:22:45.555348 kubelet[2548]: I0113 20:22:45.555284 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:22:45.555348 kubelet[2548]: I0113 20:22:45.555308 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:22:45.555461 kubelet[2548]: I0113 20:22:45.555338 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:22:45.800653 kubelet[2548]: E0113 20:22:45.800486 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:45.801653 kubelet[2548]: E0113 20:22:45.801582 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:45.801737 kubelet[2548]: E0113 20:22:45.801702 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:45.912384 sudo[2583]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:22:45.912653 sudo[2583]: pam_unix(sudo:session): 
session opened for user root(uid=0) by core(uid=0) Jan 13 20:22:46.329466 sudo[2583]: pam_unix(sudo:session): session closed for user root Jan 13 20:22:46.350963 kubelet[2548]: I0113 20:22:46.349492 2548 apiserver.go:52] "Watching apiserver" Jan 13 20:22:46.355015 kubelet[2548]: I0113 20:22:46.354993 2548 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:22:46.392908 kubelet[2548]: E0113 20:22:46.392193 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:46.392908 kubelet[2548]: E0113 20:22:46.392710 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:46.399244 kubelet[2548]: E0113 20:22:46.399178 2548 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 13 20:22:46.399462 kubelet[2548]: E0113 20:22:46.399427 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:46.411316 kubelet[2548]: I0113 20:22:46.411081 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.411050027 podStartE2EDuration="1.411050027s" podCreationTimestamp="2025-01-13 20:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:22:46.410835771 +0000 UTC m=+1.113572577" watchObservedRunningTime="2025-01-13 20:22:46.411050027 +0000 UTC m=+1.113786833" Jan 13 20:22:46.425453 kubelet[2548]: I0113 20:22:46.425227 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.425210708 podStartE2EDuration="1.425210708s" podCreationTimestamp="2025-01-13 20:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:22:46.425116844 +0000 UTC m=+1.127853650" watchObservedRunningTime="2025-01-13 20:22:46.425210708 +0000 UTC m=+1.127947474" Jan 13 20:22:46.425453 kubelet[2548]: I0113 20:22:46.425346 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.4253421419999999 podStartE2EDuration="1.425342142s" podCreationTimestamp="2025-01-13 20:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:22:46.418008236 +0000 UTC m=+1.120745042" watchObservedRunningTime="2025-01-13 20:22:46.425342142 +0000 UTC m=+1.128078908" Jan 13 20:22:47.393624 kubelet[2548]: E0113 20:22:47.393575 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:48.062285 sudo[1651]: pam_unix(sudo:session): session closed for user root Jan 13 20:22:48.063365 sshd[1650]: Connection closed by 10.0.0.1 port 36514 Jan 13 20:22:48.063693 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Jan 13 20:22:48.066450 systemd[1]: 
session-7.scope: Deactivated successfully. Jan 13 20:22:48.066635 systemd[1]: session-7.scope: Consumed 6.100s CPU time, 150.9M memory peak, 0B memory swap peak. Jan 13 20:22:48.067732 systemd[1]: sshd@6-10.0.0.109:22-10.0.0.1:36514.service: Deactivated successfully. Jan 13 20:22:48.069691 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:22:48.070974 systemd-logind[1453]: Removed session 7. Jan 13 20:22:51.658041 kubelet[2548]: I0113 20:22:51.657998 2548 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:22:51.659334 containerd[1472]: time="2025-01-13T20:22:51.659285030Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:22:51.659597 kubelet[2548]: I0113 20:22:51.659512 2548 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:22:52.149885 kubelet[2548]: E0113 20:22:52.149747 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:52.403782 kubelet[2548]: E0113 20:22:52.401109 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:52.627585 kubelet[2548]: E0113 20:22:52.626301 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:52.698156 systemd[1]: Created slice kubepods-besteffort-podc348a4f5_01ae_491d_8508_420faa452074.slice - libcontainer container kubepods-besteffort-podc348a4f5_01ae_491d_8508_420faa452074.slice. 
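The runtime-config entries above push PodCIDR 192.168.0.0/24 to the container runtime and update the kubelet's pod CIDR from "" to that range. As a small, purely illustrative check of what that range covers (the IPs below are hypothetical):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Pod CIDR pushed to the runtime in the log entries above.
	cidr := netip.MustParsePrefix("192.168.0.0/24")

	// Hypothetical addresses; only the first falls inside the node's PodCIDR.
	for _, ip := range []string{"192.168.0.17", "10.0.0.109"} {
		addr := netip.MustParseAddr(ip)
		fmt.Printf("%s in %s: %v\n", addr, cidr, cidr.Contains(addr))
	}
}
```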
Jan 13 20:22:52.706113 kubelet[2548]: I0113 20:22:52.705108 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7hlr\" (UniqueName: \"kubernetes.io/projected/47c5d4fa-0c6d-44dc-af1d-c3953839d618-kube-api-access-m7hlr\") pod \"cilium-489pd\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " pod="kube-system/cilium-489pd" Jan 13 20:22:52.706113 kubelet[2548]: I0113 20:22:52.705141 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4hnn\" (UniqueName: \"kubernetes.io/projected/c348a4f5-01ae-491d-8508-420faa452074-kube-api-access-c4hnn\") pod \"kube-proxy-tnjbf\" (UID: \"c348a4f5-01ae-491d-8508-420faa452074\") " pod="kube-system/kube-proxy-tnjbf" Jan 13 20:22:52.706113 kubelet[2548]: I0113 20:22:52.705161 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47c5d4fa-0c6d-44dc-af1d-c3953839d618-clustermesh-secrets\") pod \"cilium-489pd\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " pod="kube-system/cilium-489pd" Jan 13 20:22:52.706113 kubelet[2548]: I0113 20:22:52.705177 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47c5d4fa-0c6d-44dc-af1d-c3953839d618-cilium-config-path\") pod \"cilium-489pd\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " pod="kube-system/cilium-489pd" Jan 13 20:22:52.706113 kubelet[2548]: I0113 20:22:52.705193 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c348a4f5-01ae-491d-8508-420faa452074-kube-proxy\") pod \"kube-proxy-tnjbf\" (UID: \"c348a4f5-01ae-491d-8508-420faa452074\") " pod="kube-system/kube-proxy-tnjbf" Jan 13 20:22:52.706522 kubelet[2548]: I0113 20:22:52.705206 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c348a4f5-01ae-491d-8508-420faa452074-lib-modules\") pod \"kube-proxy-tnjbf\" (UID: \"c348a4f5-01ae-491d-8508-420faa452074\") " pod="kube-system/kube-proxy-tnjbf" Jan 13 20:22:52.706522 kubelet[2548]: I0113 20:22:52.705220 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-cilium-run\") pod \"cilium-489pd\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " pod="kube-system/cilium-489pd" Jan 13 20:22:52.706522 kubelet[2548]: I0113 20:22:52.705423 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-host-proc-sys-net\") pod \"cilium-489pd\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " pod="kube-system/cilium-489pd" Jan 13 20:22:52.706522 kubelet[2548]: I0113 20:22:52.705446 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-bpf-maps\") pod \"cilium-489pd\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " pod="kube-system/cilium-489pd" Jan 13 20:22:52.706522 kubelet[2548]: I0113 20:22:52.705477 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-cilium-cgroup\") pod \"cilium-489pd\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " pod="kube-system/cilium-489pd" Jan 13 20:22:52.706522 kubelet[2548]: I0113 20:22:52.705498 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-etc-cni-netd\") pod \"cilium-489pd\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " pod="kube-system/cilium-489pd" Jan 13 20:22:52.706651 kubelet[2548]: I0113 20:22:52.705521 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47c5d4fa-0c6d-44dc-af1d-c3953839d618-hubble-tls\") pod \"cilium-489pd\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " pod="kube-system/cilium-489pd" Jan 13 20:22:52.706651 kubelet[2548]: I0113 20:22:52.705566 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-cni-path\") pod \"cilium-489pd\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " pod="kube-system/cilium-489pd" Jan 13 20:22:52.706651 kubelet[2548]: I0113 20:22:52.705583 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-xtables-lock\") pod \"cilium-489pd\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " pod="kube-system/cilium-489pd" Jan 13 20:22:52.706651 kubelet[2548]: I0113 20:22:52.705613 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-host-proc-sys-kernel\") pod \"cilium-489pd\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " pod="kube-system/cilium-489pd" Jan 13 20:22:52.706651 kubelet[2548]: I0113 20:22:52.705638 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c348a4f5-01ae-491d-8508-420faa452074-xtables-lock\") pod \"kube-proxy-tnjbf\" (UID: \"c348a4f5-01ae-491d-8508-420faa452074\") " pod="kube-system/kube-proxy-tnjbf" Jan 13 20:22:52.706651 kubelet[2548]: I0113 20:22:52.705949 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-hostproc\") pod \"cilium-489pd\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " pod="kube-system/cilium-489pd" Jan 13 20:22:52.706856 kubelet[2548]: I0113 20:22:52.705976 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-lib-modules\") pod \"cilium-489pd\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " pod="kube-system/cilium-489pd" Jan 13 20:22:52.708142 systemd[1]: Created slice kubepods-burstable-pod47c5d4fa_0c6d_44dc_af1d_c3953839d618.slice - libcontainer container kubepods-burstable-pod47c5d4fa_0c6d_44dc_af1d_c3953839d618.slice. 
Jan 13 20:22:52.749366 systemd[1]: Created slice kubepods-besteffort-pod87c746d3_fb4c_4b75_921c_6c8140db1ae4.slice - libcontainer container kubepods-besteffort-pod87c746d3_fb4c_4b75_921c_6c8140db1ae4.slice. Jan 13 20:22:52.806326 kubelet[2548]: I0113 20:22:52.806280 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87c746d3-fb4c-4b75-921c-6c8140db1ae4-cilium-config-path\") pod \"cilium-operator-5d85765b45-cjp22\" (UID: \"87c746d3-fb4c-4b75-921c-6c8140db1ae4\") " pod="kube-system/cilium-operator-5d85765b45-cjp22" Jan 13 20:22:52.806326 kubelet[2548]: I0113 20:22:52.806333 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhsf9\" (UniqueName: \"kubernetes.io/projected/87c746d3-fb4c-4b75-921c-6c8140db1ae4-kube-api-access-rhsf9\") pod \"cilium-operator-5d85765b45-cjp22\" (UID: \"87c746d3-fb4c-4b75-921c-6c8140db1ae4\") " pod="kube-system/cilium-operator-5d85765b45-cjp22" Jan 13 20:22:53.010031 kubelet[2548]: E0113 20:22:53.009899 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:53.010732 kubelet[2548]: E0113 20:22:53.010674 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:53.011313 containerd[1472]: time="2025-01-13T20:22:53.011273653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-489pd,Uid:47c5d4fa-0c6d-44dc-af1d-c3953839d618,Namespace:kube-system,Attempt:0,}" Jan 13 20:22:53.011717 containerd[1472]: time="2025-01-13T20:22:53.011291015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tnjbf,Uid:c348a4f5-01ae-491d-8508-420faa452074,Namespace:kube-system,Attempt:0,}" Jan 13 20:22:53.036513 containerd[1472]: time="2025-01-13T20:22:53.036414813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:22:53.036513 containerd[1472]: time="2025-01-13T20:22:53.036470102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:22:53.036513 containerd[1472]: time="2025-01-13T20:22:53.036486665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:22:53.036776 containerd[1472]: time="2025-01-13T20:22:53.036574199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:22:53.039631 containerd[1472]: time="2025-01-13T20:22:53.039546451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:22:53.039768 containerd[1472]: time="2025-01-13T20:22:53.039664190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:22:53.039768 containerd[1472]: time="2025-01-13T20:22:53.039723440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:22:53.040214 containerd[1472]: time="2025-01-13T20:22:53.039834498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:22:53.053629 kubelet[2548]: E0113 20:22:53.053478 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:53.054215 containerd[1472]: time="2025-01-13T20:22:53.054182273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-cjp22,Uid:87c746d3-fb4c-4b75-921c-6c8140db1ae4,Namespace:kube-system,Attempt:0,}" Jan 13 20:22:53.055604 systemd[1]: Started cri-containerd-e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab.scope - libcontainer container e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab. Jan 13 20:22:53.059771 systemd[1]: Started cri-containerd-887d07032252caed862c78b7abbe6839a47b19c4acf80687209090fca9767a1f.scope - libcontainer container 887d07032252caed862c78b7abbe6839a47b19c4acf80687209090fca9767a1f. Jan 13 20:22:53.082450 containerd[1472]: time="2025-01-13T20:22:53.082351454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-489pd,Uid:47c5d4fa-0c6d-44dc-af1d-c3953839d618,Namespace:kube-system,Attempt:0,} returns sandbox id \"e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab\"" Jan 13 20:22:53.084588 kubelet[2548]: E0113 20:22:53.083260 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:53.085897 containerd[1472]: time="2025-01-13T20:22:53.085864595Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:22:53.090312 containerd[1472]: time="2025-01-13T20:22:53.090263923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tnjbf,Uid:c348a4f5-01ae-491d-8508-420faa452074,Namespace:kube-system,Attempt:0,} returns sandbox id \"887d07032252caed862c78b7abbe6839a47b19c4acf80687209090fca9767a1f\"" Jan 13 20:22:53.091039 kubelet[2548]: E0113 20:22:53.091009 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:53.094078 containerd[1472]: time="2025-01-13T20:22:53.094024425Z" level=info msg="CreateContainer within sandbox \"887d07032252caed862c78b7abbe6839a47b19c4acf80687209090fca9767a1f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:22:53.099317 containerd[1472]: time="2025-01-13T20:22:53.097942474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:22:53.099317 containerd[1472]: time="2025-01-13T20:22:53.097998243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:22:53.099317 containerd[1472]: time="2025-01-13T20:22:53.098012605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:22:53.099317 containerd[1472]: time="2025-01-13T20:22:53.098145827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:22:53.111822 containerd[1472]: time="2025-01-13T20:22:53.111776283Z" level=info msg="CreateContainer within sandbox \"887d07032252caed862c78b7abbe6839a47b19c4acf80687209090fca9767a1f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4f62874873064a8d387504fbe0f43df769fb7f7dd179d6031f3464846be7dd71\"" Jan 13 20:22:53.112433 containerd[1472]: time="2025-01-13T20:22:53.112367260Z" level=info msg="StartContainer for \"4f62874873064a8d387504fbe0f43df769fb7f7dd179d6031f3464846be7dd71\"" Jan 13 20:22:53.119266 systemd[1]: Started cri-containerd-ca48b6d2bdac02473a1c977a9e95b2ffba5489969e09be7e26903c5ef1a17e8b.scope - libcontainer container ca48b6d2bdac02473a1c977a9e95b2ffba5489969e09be7e26903c5ef1a17e8b. Jan 13 20:22:53.149298 systemd[1]: Started cri-containerd-4f62874873064a8d387504fbe0f43df769fb7f7dd179d6031f3464846be7dd71.scope - libcontainer container 4f62874873064a8d387504fbe0f43df769fb7f7dd179d6031f3464846be7dd71. Jan 13 20:22:53.160112 containerd[1472]: time="2025-01-13T20:22:53.160035708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-cjp22,Uid:87c746d3-fb4c-4b75-921c-6c8140db1ae4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca48b6d2bdac02473a1c977a9e95b2ffba5489969e09be7e26903c5ef1a17e8b\"" Jan 13 20:22:53.160973 kubelet[2548]: E0113 20:22:53.160951 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:53.179317 containerd[1472]: time="2025-01-13T20:22:53.179274811Z" level=info msg="StartContainer for \"4f62874873064a8d387504fbe0f43df769fb7f7dd179d6031f3464846be7dd71\" returns successfully" Jan 13 20:22:53.405298 kubelet[2548]: E0113 20:22:53.405207 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:53.405298 kubelet[2548]: E0113 20:22:53.405255 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:53.423903 kubelet[2548]: I0113 20:22:53.423850 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tnjbf" podStartSLOduration=1.423834998 podStartE2EDuration="1.423834998s" podCreationTimestamp="2025-01-13 20:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:22:53.42269425 +0000 UTC m=+8.125431176" watchObservedRunningTime="2025-01-13 20:22:53.423834998 +0000 UTC m=+8.126571764" Jan 13 20:22:54.283006 kubelet[2548]: E0113 20:22:54.282678 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:54.406117 kubelet[2548]: E0113 20:22:54.406075 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:22:57.086187 update_engine[1461]: I20250113 20:22:57.086107 1461 update_attempter.cc:509] Updating boot flags... 
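The pod_startup_latency_tracker entries above report podStartSLOduration values that equal the gap between podCreationTimestamp and watchObservedRunningTime; for kube-proxy-tnjbf, 20:22:53.423834998 minus 20:22:52 gives the logged 1.423834998s. A small sketch of that arithmetic, using the timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the "2025-01-13 20:22:52 +0000 UTC" form used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-01-13 20:22:52 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-01-13 20:22:53.423834998 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Matches the reported podStartSLOduration=1.423834998s.
	fmt.Println("podStartSLOduration:", observed.Sub(created))
}
```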
Jan 13 20:22:57.130705 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2926) Jan 13 20:22:57.158090 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2927) Jan 13 20:23:08.838445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933228434.mount: Deactivated successfully. Jan 13 20:23:10.319568 containerd[1472]: time="2025-01-13T20:23:10.319509762Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:10.320548 containerd[1472]: time="2025-01-13T20:23:10.320503417Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651554" Jan 13 20:23:10.322222 containerd[1472]: time="2025-01-13T20:23:10.322165909Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:10.323562 containerd[1472]: time="2025-01-13T20:23:10.323475981Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 17.237569421s" Jan 13 20:23:10.323562 containerd[1472]: time="2025-01-13T20:23:10.323511263Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 20:23:10.326306 containerd[1472]: time="2025-01-13T20:23:10.326276696Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:23:10.334633 containerd[1472]: time="2025-01-13T20:23:10.334584435Z" level=info msg="CreateContainer within sandbox \"e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:23:10.360576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1391489255.mount: Deactivated successfully. Jan 13 20:23:10.362873 containerd[1472]: time="2025-01-13T20:23:10.362831355Z" level=info msg="CreateContainer within sandbox \"e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7\"" Jan 13 20:23:10.364093 containerd[1472]: time="2025-01-13T20:23:10.363320382Z" level=info msg="StartContainer for \"5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7\"" Jan 13 20:23:10.388212 systemd[1]: Started cri-containerd-5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7.scope - libcontainer container 5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7. 
Jan 13 20:23:10.415557 containerd[1472]: time="2025-01-13T20:23:10.415509705Z" level=info msg="StartContainer for \"5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7\" returns successfully" Jan 13 20:23:10.461644 systemd[1]: cri-containerd-5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7.scope: Deactivated successfully. Jan 13 20:23:10.468751 kubelet[2548]: E0113 20:23:10.468474 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:10.652144 containerd[1472]: time="2025-01-13T20:23:10.651989369Z" level=info msg="shim disconnected" id=5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7 namespace=k8s.io Jan 13 20:23:10.652144 containerd[1472]: time="2025-01-13T20:23:10.652043852Z" level=warning msg="cleaning up after shim disconnected" id=5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7 namespace=k8s.io Jan 13 20:23:10.652144 containerd[1472]: time="2025-01-13T20:23:10.652052172Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:23:11.359003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7-rootfs.mount: Deactivated successfully. Jan 13 20:23:11.459440 kubelet[2548]: E0113 20:23:11.459011 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:11.462793 containerd[1472]: time="2025-01-13T20:23:11.462425400Z" level=info msg="CreateContainer within sandbox \"e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:23:11.492471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2007750950.mount: Deactivated successfully. Jan 13 20:23:11.496125 containerd[1472]: time="2025-01-13T20:23:11.496087063Z" level=info msg="CreateContainer within sandbox \"e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b\"" Jan 13 20:23:11.496586 containerd[1472]: time="2025-01-13T20:23:11.496565088Z" level=info msg="StartContainer for \"9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b\"" Jan 13 20:23:11.525255 systemd[1]: Started cri-containerd-9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b.scope - libcontainer container 9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b. Jan 13 20:23:11.555837 containerd[1472]: time="2025-01-13T20:23:11.555788315Z" level=info msg="StartContainer for \"9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b\" returns successfully" Jan 13 20:23:11.569719 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:23:11.569920 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:23:11.569985 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:23:11.580212 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:23:11.580379 systemd[1]: cri-containerd-9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b.scope: Deactivated successfully. Jan 13 20:23:11.605219 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 20:23:11.611288 containerd[1472]: time="2025-01-13T20:23:11.611006775Z" level=info msg="shim disconnected" id=9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b namespace=k8s.io Jan 13 20:23:11.611288 containerd[1472]: time="2025-01-13T20:23:11.611070818Z" level=warning msg="cleaning up after shim disconnected" id=9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b namespace=k8s.io Jan 13 20:23:11.611288 containerd[1472]: time="2025-01-13T20:23:11.611082059Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:23:12.238753 systemd[1]: Started sshd@7-10.0.0.109:22-10.0.0.1:33518.service - OpenSSH per-connection server daemon (10.0.0.1:33518). Jan 13 20:23:12.283485 sshd[3087]: Accepted publickey for core from 10.0.0.1 port 33518 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:23:12.285030 sshd-session[3087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:23:12.289234 systemd-logind[1453]: New session 8 of user core. Jan 13 20:23:12.305254 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:23:12.359891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b-rootfs.mount: Deactivated successfully. Jan 13 20:23:12.433111 sshd[3089]: Connection closed by 10.0.0.1 port 33518 Jan 13 20:23:12.432953 sshd-session[3087]: pam_unix(sshd:session): session closed for user core Jan 13 20:23:12.435547 systemd[1]: sshd@7-10.0.0.109:22-10.0.0.1:33518.service: Deactivated successfully. Jan 13 20:23:12.437143 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:23:12.438748 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:23:12.439738 systemd-logind[1453]: Removed session 8. Jan 13 20:23:12.462215 kubelet[2548]: E0113 20:23:12.462185 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:12.465535 containerd[1472]: time="2025-01-13T20:23:12.464424513Z" level=info msg="CreateContainer within sandbox \"e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:23:12.499169 containerd[1472]: time="2025-01-13T20:23:12.499074595Z" level=info msg="CreateContainer within sandbox \"e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567\"" Jan 13 20:23:12.501118 containerd[1472]: time="2025-01-13T20:23:12.500330776Z" level=info msg="StartContainer for \"1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567\"" Jan 13 20:23:12.526237 systemd[1]: Started cri-containerd-1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567.scope - libcontainer container 1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567. Jan 13 20:23:12.553150 containerd[1472]: time="2025-01-13T20:23:12.553104579Z" level=info msg="StartContainer for \"1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567\" returns successfully" Jan 13 20:23:12.563115 systemd[1]: cri-containerd-1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567.scope: Deactivated successfully. 
Jan 13 20:23:12.593782 containerd[1472]: time="2025-01-13T20:23:12.593706070Z" level=info msg="shim disconnected" id=1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567 namespace=k8s.io Jan 13 20:23:12.593782 containerd[1472]: time="2025-01-13T20:23:12.593780434Z" level=warning msg="cleaning up after shim disconnected" id=1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567 namespace=k8s.io Jan 13 20:23:12.593782 containerd[1472]: time="2025-01-13T20:23:12.593791154Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:23:13.359935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567-rootfs.mount: Deactivated successfully. Jan 13 20:23:13.467724 kubelet[2548]: E0113 20:23:13.467285 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:13.471079 containerd[1472]: time="2025-01-13T20:23:13.470925956Z" level=info msg="CreateContainer within sandbox \"e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:23:13.501954 containerd[1472]: time="2025-01-13T20:23:13.501825963Z" level=info msg="CreateContainer within sandbox \"e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f\"" Jan 13 20:23:13.502546 containerd[1472]: time="2025-01-13T20:23:13.502364507Z" level=info msg="StartContainer for \"1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f\"" Jan 13 20:23:13.528204 systemd[1]: Started cri-containerd-1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f.scope - libcontainer container 1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f. Jan 13 20:23:13.546244 systemd[1]: cri-containerd-1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f.scope: Deactivated successfully. Jan 13 20:23:13.546727 containerd[1472]: time="2025-01-13T20:23:13.546590561Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47c5d4fa_0c6d_44dc_af1d_c3953839d618.slice/cri-containerd-1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f.scope/memory.events\": no such file or directory" Jan 13 20:23:13.549451 containerd[1472]: time="2025-01-13T20:23:13.549420489Z" level=info msg="StartContainer for \"1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f\" returns successfully" Jan 13 20:23:13.577471 containerd[1472]: time="2025-01-13T20:23:13.577414604Z" level=info msg="shim disconnected" id=1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f namespace=k8s.io Jan 13 20:23:13.577471 containerd[1472]: time="2025-01-13T20:23:13.577469286Z" level=warning msg="cleaning up after shim disconnected" id=1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f namespace=k8s.io Jan 13 20:23:13.577471 containerd[1472]: time="2025-01-13T20:23:13.577478807Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:23:14.360118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f-rootfs.mount: Deactivated successfully. 
Jan 13 20:23:14.471109 kubelet[2548]: E0113 20:23:14.471080 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:14.478648 containerd[1472]: time="2025-01-13T20:23:14.478413978Z" level=info msg="CreateContainer within sandbox \"e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:23:14.503746 containerd[1472]: time="2025-01-13T20:23:14.503697937Z" level=info msg="CreateContainer within sandbox \"e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91\"" Jan 13 20:23:14.505054 containerd[1472]: time="2025-01-13T20:23:14.504244641Z" level=info msg="StartContainer for \"4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91\"" Jan 13 20:23:14.531276 systemd[1]: Started cri-containerd-4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91.scope - libcontainer container 4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91. Jan 13 20:23:14.557973 containerd[1472]: time="2025-01-13T20:23:14.557886050Z" level=info msg="StartContainer for \"4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91\" returns successfully" Jan 13 20:23:14.703222 kubelet[2548]: I0113 20:23:14.703098 2548 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 20:23:14.757969 systemd[1]: Created slice kubepods-burstable-pod07f8825a_025e_4639_8e3a_2fb009a1fb0f.slice - libcontainer container kubepods-burstable-pod07f8825a_025e_4639_8e3a_2fb009a1fb0f.slice. Jan 13 20:23:14.858983 systemd[1]: Created slice kubepods-burstable-pod74b40401_ea9c_44c4_880f_d104e02a4c5d.slice - libcontainer container kubepods-burstable-pod74b40401_ea9c_44c4_880f_d104e02a4c5d.slice. 
Jan 13 20:23:14.862076 kubelet[2548]: I0113 20:23:14.862027 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07f8825a-025e-4639-8e3a-2fb009a1fb0f-config-volume\") pod \"coredns-6f6b679f8f-bq45j\" (UID: \"07f8825a-025e-4639-8e3a-2fb009a1fb0f\") " pod="kube-system/coredns-6f6b679f8f-bq45j" Jan 13 20:23:14.862220 kubelet[2548]: I0113 20:23:14.862098 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shq4s\" (UniqueName: \"kubernetes.io/projected/07f8825a-025e-4639-8e3a-2fb009a1fb0f-kube-api-access-shq4s\") pod \"coredns-6f6b679f8f-bq45j\" (UID: \"07f8825a-025e-4639-8e3a-2fb009a1fb0f\") " pod="kube-system/coredns-6f6b679f8f-bq45j" Jan 13 20:23:14.862220 kubelet[2548]: I0113 20:23:14.862118 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74b40401-ea9c-44c4-880f-d104e02a4c5d-config-volume\") pod \"coredns-6f6b679f8f-glgfz\" (UID: \"74b40401-ea9c-44c4-880f-d104e02a4c5d\") " pod="kube-system/coredns-6f6b679f8f-glgfz" Jan 13 20:23:14.862220 kubelet[2548]: I0113 20:23:14.862150 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc2sw\" (UniqueName: \"kubernetes.io/projected/74b40401-ea9c-44c4-880f-d104e02a4c5d-kube-api-access-rc2sw\") pod \"coredns-6f6b679f8f-glgfz\" (UID: \"74b40401-ea9c-44c4-880f-d104e02a4c5d\") " pod="kube-system/coredns-6f6b679f8f-glgfz" Jan 13 20:23:15.060541 kubelet[2548]: E0113 20:23:15.060409 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:15.061652 containerd[1472]: time="2025-01-13T20:23:15.061599824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bq45j,Uid:07f8825a-025e-4639-8e3a-2fb009a1fb0f,Namespace:kube-system,Attempt:0,}" Jan 13 20:23:15.165604 kubelet[2548]: E0113 20:23:15.165570 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:15.167232 containerd[1472]: time="2025-01-13T20:23:15.167198049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-glgfz,Uid:74b40401-ea9c-44c4-880f-d104e02a4c5d,Namespace:kube-system,Attempt:0,}" Jan 13 20:23:15.475118 kubelet[2548]: E0113 20:23:15.474983 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:15.490775 kubelet[2548]: I0113 20:23:15.490700 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-489pd" podStartSLOduration=6.24977258 podStartE2EDuration="23.49068191s" podCreationTimestamp="2025-01-13 20:22:52 +0000 UTC" firstStartedPulling="2025-01-13 20:22:53.085121632 +0000 UTC m=+7.787858438" lastFinishedPulling="2025-01-13 20:23:10.326030962 +0000 UTC m=+25.028767768" observedRunningTime="2025-01-13 20:23:15.490509223 +0000 UTC m=+30.193246029" watchObservedRunningTime="2025-01-13 20:23:15.49068191 +0000 UTC m=+30.193418676" Jan 13 20:23:16.476391 kubelet[2548]: E0113 20:23:16.476360 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:17.443588 systemd[1]: Started sshd@8-10.0.0.109:22-10.0.0.1:42454.service - OpenSSH per-connection server daemon (10.0.0.1:42454). Jan 13 20:23:17.477711 kubelet[2548]: E0113 20:23:17.477687 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:17.491514 sshd[3369]: Accepted publickey for core from 10.0.0.1 port 42454 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:23:17.492823 sshd-session[3369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:23:17.496968 systemd-logind[1453]: New session 9 of user core. Jan 13 20:23:17.506228 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:23:17.623427 sshd[3371]: Connection closed by 10.0.0.1 port 42454 Jan 13 20:23:17.623766 sshd-session[3369]: pam_unix(sshd:session): session closed for user core Jan 13 20:23:17.626984 systemd[1]: sshd@8-10.0.0.109:22-10.0.0.1:42454.service: Deactivated successfully. Jan 13 20:23:17.628824 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:23:17.629634 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:23:17.630406 systemd-logind[1453]: Removed session 9. Jan 13 20:23:21.726615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1830270237.mount: Deactivated successfully. Jan 13 20:23:22.031138 containerd[1472]: time="2025-01-13T20:23:22.030468923Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:22.031138 containerd[1472]: time="2025-01-13T20:23:22.030929133Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137742" Jan 13 20:23:22.032339 containerd[1472]: time="2025-01-13T20:23:22.032309364Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:22.034520 containerd[1472]: time="2025-01-13T20:23:22.034483293Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 11.708171916s" Jan 13 20:23:22.034520 containerd[1472]: time="2025-01-13T20:23:22.034518734Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 20:23:22.036344 containerd[1472]: time="2025-01-13T20:23:22.036299334Z" level=info msg="CreateContainer within sandbox \"ca48b6d2bdac02473a1c977a9e95b2ffba5489969e09be7e26903c5ef1a17e8b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:23:22.047959 containerd[1472]: time="2025-01-13T20:23:22.047919715Z" level=info msg="CreateContainer within sandbox 
\"ca48b6d2bdac02473a1c977a9e95b2ffba5489969e09be7e26903c5ef1a17e8b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf\"" Jan 13 20:23:22.048657 containerd[1472]: time="2025-01-13T20:23:22.048625211Z" level=info msg="StartContainer for \"36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf\"" Jan 13 20:23:22.074235 systemd[1]: Started cri-containerd-36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf.scope - libcontainer container 36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf. Jan 13 20:23:22.099361 containerd[1472]: time="2025-01-13T20:23:22.099245670Z" level=info msg="StartContainer for \"36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf\" returns successfully" Jan 13 20:23:22.494487 kubelet[2548]: E0113 20:23:22.493662 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:22.651451 systemd[1]: Started sshd@9-10.0.0.109:22-10.0.0.1:45766.service - OpenSSH per-connection server daemon (10.0.0.1:45766). Jan 13 20:23:22.711484 sshd[3435]: Accepted publickey for core from 10.0.0.1 port 45766 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:23:22.713342 sshd-session[3435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:23:22.723293 systemd-logind[1453]: New session 10 of user core. Jan 13 20:23:22.728308 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:23:22.882655 sshd[3437]: Connection closed by 10.0.0.1 port 45766 Jan 13 20:23:22.883404 sshd-session[3435]: pam_unix(sshd:session): session closed for user core Jan 13 20:23:22.886728 systemd[1]: sshd@9-10.0.0.109:22-10.0.0.1:45766.service: Deactivated successfully. Jan 13 20:23:22.889858 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:23:22.890966 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:23:22.893527 systemd-logind[1453]: Removed session 10. 
Jan 13 20:23:23.495525 kubelet[2548]: E0113 20:23:23.495493 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:24.833712 kubelet[2548]: E0113 20:23:24.833656 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:25.773153 systemd-networkd[1403]: cilium_host: Link UP Jan 13 20:23:25.773286 systemd-networkd[1403]: cilium_net: Link UP Jan 13 20:23:25.773413 systemd-networkd[1403]: cilium_net: Gained carrier Jan 13 20:23:25.773534 systemd-networkd[1403]: cilium_host: Gained carrier Jan 13 20:23:25.878389 systemd-networkd[1403]: cilium_vxlan: Link UP Jan 13 20:23:25.878629 systemd-networkd[1403]: cilium_vxlan: Gained carrier Jan 13 20:23:25.945432 systemd-networkd[1403]: cilium_net: Gained IPv6LL Jan 13 20:23:26.185450 systemd-networkd[1403]: cilium_host: Gained IPv6LL Jan 13 20:23:26.207091 kernel: NET: Registered PF_ALG protocol family Jan 13 20:23:26.779195 systemd-networkd[1403]: lxc_health: Link UP Jan 13 20:23:26.790188 systemd-networkd[1403]: lxc_health: Gained carrier Jan 13 20:23:27.026729 kubelet[2548]: E0113 20:23:27.026697 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:27.063366 kubelet[2548]: I0113 20:23:27.063301 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-cjp22" podStartSLOduration=6.189765152 podStartE2EDuration="35.063285566s" podCreationTimestamp="2025-01-13 20:22:52 +0000 UTC" firstStartedPulling="2025-01-13 20:22:53.161651855 +0000 UTC m=+7.864388661" lastFinishedPulling="2025-01-13 20:23:22.035172269 +0000 UTC m=+36.737909075" observedRunningTime="2025-01-13 20:23:22.502367215 +0000 UTC m=+37.205104021" watchObservedRunningTime="2025-01-13 20:23:27.063285566 +0000 UTC m=+41.766022372" Jan 13 20:23:27.121188 systemd-networkd[1403]: cilium_vxlan: Gained IPv6LL Jan 13 20:23:27.203491 systemd-networkd[1403]: lxcdb07cc30df71: Link UP Jan 13 20:23:27.213304 kernel: eth0: renamed from tmpd3a6c Jan 13 20:23:27.231984 systemd-networkd[1403]: lxcdb07cc30df71: Gained carrier Jan 13 20:23:27.232620 systemd-networkd[1403]: lxca62b31b14a2d: Link UP Jan 13 20:23:27.234093 kernel: eth0: renamed from tmp41853 Jan 13 20:23:27.240753 systemd-networkd[1403]: lxca62b31b14a2d: Gained carrier Jan 13 20:23:27.501909 kubelet[2548]: E0113 20:23:27.501796 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:27.901234 systemd[1]: Started sshd@10-10.0.0.109:22-10.0.0.1:45780.service - OpenSSH per-connection server daemon (10.0.0.1:45780). Jan 13 20:23:27.950636 sshd[3830]: Accepted publickey for core from 10.0.0.1 port 45780 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:23:27.952044 sshd-session[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:23:27.960365 systemd-logind[1453]: New session 11 of user core. Jan 13 20:23:27.967674 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 13 20:23:28.095580 sshd[3832]: Connection closed by 10.0.0.1 port 45780 Jan 13 20:23:28.096128 sshd-session[3830]: pam_unix(sshd:session): session closed for user core Jan 13 20:23:28.100711 systemd[1]: sshd@10-10.0.0.109:22-10.0.0.1:45780.service: Deactivated successfully. Jan 13 20:23:28.102796 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:23:28.103668 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:23:28.105771 systemd-logind[1453]: Removed session 11. Jan 13 20:23:28.593203 systemd-networkd[1403]: lxc_health: Gained IPv6LL Jan 13 20:23:29.041198 systemd-networkd[1403]: lxcdb07cc30df71: Gained IPv6LL Jan 13 20:23:29.233227 systemd-networkd[1403]: lxca62b31b14a2d: Gained IPv6LL Jan 13 20:23:30.750959 containerd[1472]: time="2025-01-13T20:23:30.750860115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:23:30.750959 containerd[1472]: time="2025-01-13T20:23:30.750921237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:23:30.750959 containerd[1472]: time="2025-01-13T20:23:30.750936157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:23:30.751359 containerd[1472]: time="2025-01-13T20:23:30.751011998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:23:30.754640 containerd[1472]: time="2025-01-13T20:23:30.751729811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:23:30.754640 containerd[1472]: time="2025-01-13T20:23:30.754337738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:23:30.754640 containerd[1472]: time="2025-01-13T20:23:30.754352379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:23:30.754640 containerd[1472]: time="2025-01-13T20:23:30.754433260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:23:30.781244 systemd[1]: Started cri-containerd-418532699cc947b9379a94e4c8e73c91a478794fe33eb0f7bbaad18dee7d9ce7.scope - libcontainer container 418532699cc947b9379a94e4c8e73c91a478794fe33eb0f7bbaad18dee7d9ce7. Jan 13 20:23:30.784455 systemd[1]: Started cri-containerd-d3a6c60e6b8f96642c2d450b4abb38bc5f526ad9dab3a4297d97ac726d564476.scope - libcontainer container d3a6c60e6b8f96642c2d450b4abb38bc5f526ad9dab3a4297d97ac726d564476. 
Jan 13 20:23:30.795536 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:23:30.797915 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:23:30.816639 containerd[1472]: time="2025-01-13T20:23:30.816600786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bq45j,Uid:07f8825a-025e-4639-8e3a-2fb009a1fb0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3a6c60e6b8f96642c2d450b4abb38bc5f526ad9dab3a4297d97ac726d564476\"" Jan 13 20:23:30.817737 kubelet[2548]: E0113 20:23:30.817577 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:30.819785 containerd[1472]: time="2025-01-13T20:23:30.819506758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-glgfz,Uid:74b40401-ea9c-44c4-880f-d104e02a4c5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"418532699cc947b9379a94e4c8e73c91a478794fe33eb0f7bbaad18dee7d9ce7\"" Jan 13 20:23:30.820474 containerd[1472]: time="2025-01-13T20:23:30.820005167Z" level=info msg="CreateContainer within sandbox \"d3a6c60e6b8f96642c2d450b4abb38bc5f526ad9dab3a4297d97ac726d564476\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:23:30.820597 kubelet[2548]: E0113 20:23:30.820335 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:30.822929 containerd[1472]: time="2025-01-13T20:23:30.822896380Z" level=info msg="CreateContainer within sandbox \"418532699cc947b9379a94e4c8e73c91a478794fe33eb0f7bbaad18dee7d9ce7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:23:30.841155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2423324015.mount: Deactivated successfully. Jan 13 20:23:30.844335 containerd[1472]: time="2025-01-13T20:23:30.844262127Z" level=info msg="CreateContainer within sandbox \"d3a6c60e6b8f96642c2d450b4abb38bc5f526ad9dab3a4297d97ac726d564476\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1364823c9f169a5bca819ef9722245e96128b8dd4eb79d0f80b6c7e616ef12e3\"" Jan 13 20:23:30.844906 containerd[1472]: time="2025-01-13T20:23:30.844867458Z" level=info msg="StartContainer for \"1364823c9f169a5bca819ef9722245e96128b8dd4eb79d0f80b6c7e616ef12e3\"" Jan 13 20:23:30.845366 containerd[1472]: time="2025-01-13T20:23:30.845324506Z" level=info msg="CreateContainer within sandbox \"418532699cc947b9379a94e4c8e73c91a478794fe33eb0f7bbaad18dee7d9ce7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"504b187efd7e26d9d2e7d1ef06e4ae1217d40f88221018044bc1259442dbcb5f\"" Jan 13 20:23:30.847703 containerd[1472]: time="2025-01-13T20:23:30.846603409Z" level=info msg="StartContainer for \"504b187efd7e26d9d2e7d1ef06e4ae1217d40f88221018044bc1259442dbcb5f\"" Jan 13 20:23:30.870220 systemd[1]: Started cri-containerd-1364823c9f169a5bca819ef9722245e96128b8dd4eb79d0f80b6c7e616ef12e3.scope - libcontainer container 1364823c9f169a5bca819ef9722245e96128b8dd4eb79d0f80b6c7e616ef12e3. Jan 13 20:23:30.872807 systemd[1]: Started cri-containerd-504b187efd7e26d9d2e7d1ef06e4ae1217d40f88221018044bc1259442dbcb5f.scope - libcontainer container 504b187efd7e26d9d2e7d1ef06e4ae1217d40f88221018044bc1259442dbcb5f. 
Jan 13 20:23:30.900866 containerd[1472]: time="2025-01-13T20:23:30.900821671Z" level=info msg="StartContainer for \"1364823c9f169a5bca819ef9722245e96128b8dd4eb79d0f80b6c7e616ef12e3\" returns successfully" Jan 13 20:23:30.910750 containerd[1472]: time="2025-01-13T20:23:30.910710970Z" level=info msg="StartContainer for \"504b187efd7e26d9d2e7d1ef06e4ae1217d40f88221018044bc1259442dbcb5f\" returns successfully" Jan 13 20:23:31.512224 kubelet[2548]: E0113 20:23:31.512130 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:31.517800 kubelet[2548]: E0113 20:23:31.517464 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:31.529498 kubelet[2548]: I0113 20:23:31.529387 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bq45j" podStartSLOduration=39.529370561 podStartE2EDuration="39.529370561s" podCreationTimestamp="2025-01-13 20:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:23:31.524373673 +0000 UTC m=+46.227110479" watchObservedRunningTime="2025-01-13 20:23:31.529370561 +0000 UTC m=+46.232107367" Jan 13 20:23:31.551057 kubelet[2548]: I0113 20:23:31.550983 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-glgfz" podStartSLOduration=39.550964182 podStartE2EDuration="39.550964182s" podCreationTimestamp="2025-01-13 20:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:23:31.549527036 +0000 UTC m=+46.252263842" watchObservedRunningTime="2025-01-13 20:23:31.550964182 +0000 UTC m=+46.253700988" Jan 13 20:23:32.518965 kubelet[2548]: E0113 20:23:32.518891 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:32.518965 kubelet[2548]: E0113 20:23:32.518944 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:33.106658 systemd[1]: Started sshd@11-10.0.0.109:22-10.0.0.1:46180.service - OpenSSH per-connection server daemon (10.0.0.1:46180). Jan 13 20:23:33.155106 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 46180 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:23:33.156612 sshd-session[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:23:33.161144 systemd-logind[1453]: New session 12 of user core. Jan 13 20:23:33.172247 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:23:33.295890 sshd[4021]: Connection closed by 10.0.0.1 port 46180 Jan 13 20:23:33.296256 sshd-session[4019]: pam_unix(sshd:session): session closed for user core Jan 13 20:23:33.306675 systemd[1]: sshd@11-10.0.0.109:22-10.0.0.1:46180.service: Deactivated successfully. Jan 13 20:23:33.308153 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:23:33.311327 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit. 
Jan 13 20:23:33.312548 systemd[1]: Started sshd@12-10.0.0.109:22-10.0.0.1:46190.service - OpenSSH per-connection server daemon (10.0.0.1:46190). Jan 13 20:23:33.313484 systemd-logind[1453]: Removed session 12. Jan 13 20:23:33.359083 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 46190 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:23:33.359808 sshd-session[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:23:33.363939 systemd-logind[1453]: New session 13 of user core. Jan 13 20:23:33.370288 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:23:33.515570 sshd[4037]: Connection closed by 10.0.0.1 port 46190 Jan 13 20:23:33.516048 sshd-session[4035]: pam_unix(sshd:session): session closed for user core Jan 13 20:23:33.524953 kubelet[2548]: E0113 20:23:33.522733 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:33.523482 systemd[1]: sshd@12-10.0.0.109:22-10.0.0.1:46190.service: Deactivated successfully. Jan 13 20:23:33.527149 kubelet[2548]: E0113 20:23:33.527113 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:33.528034 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:23:33.530785 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:23:33.539609 systemd[1]: Started sshd@13-10.0.0.109:22-10.0.0.1:46192.service - OpenSSH per-connection server daemon (10.0.0.1:46192). Jan 13 20:23:33.541613 systemd-logind[1453]: Removed session 13. Jan 13 20:23:33.583535 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 46192 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:23:33.584759 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:23:33.588548 systemd-logind[1453]: New session 14 of user core. Jan 13 20:23:33.599247 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:23:33.741427 sshd[4049]: Connection closed by 10.0.0.1 port 46192 Jan 13 20:23:33.743702 sshd-session[4047]: pam_unix(sshd:session): session closed for user core Jan 13 20:23:33.751938 systemd[1]: sshd@13-10.0.0.109:22-10.0.0.1:46192.service: Deactivated successfully. Jan 13 20:23:33.755757 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:23:33.757957 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:23:33.758977 systemd-logind[1453]: Removed session 14. Jan 13 20:23:38.756239 systemd[1]: Started sshd@14-10.0.0.109:22-10.0.0.1:46194.service - OpenSSH per-connection server daemon (10.0.0.1:46194). Jan 13 20:23:38.807846 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 46194 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:23:38.809453 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:23:38.813601 systemd-logind[1453]: New session 15 of user core. Jan 13 20:23:38.822256 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:23:38.945462 sshd[4065]: Connection closed by 10.0.0.1 port 46194 Jan 13 20:23:38.947452 sshd-session[4063]: pam_unix(sshd:session): session closed for user core Jan 13 20:23:38.950820 systemd-logind[1453]: Session 15 logged out. 
Waiting for processes to exit. Jan 13 20:23:38.951051 systemd[1]: sshd@14-10.0.0.109:22-10.0.0.1:46194.service: Deactivated successfully. Jan 13 20:23:38.953363 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:23:38.954289 systemd-logind[1453]: Removed session 15. Jan 13 20:23:43.956980 systemd[1]: Started sshd@15-10.0.0.109:22-10.0.0.1:33874.service - OpenSSH per-connection server daemon (10.0.0.1:33874). Jan 13 20:23:44.001791 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 33874 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:23:44.003082 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:23:44.007117 systemd-logind[1453]: New session 16 of user core. Jan 13 20:23:44.018249 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:23:44.151092 sshd[4082]: Connection closed by 10.0.0.1 port 33874 Jan 13 20:23:44.151812 sshd-session[4080]: pam_unix(sshd:session): session closed for user core Jan 13 20:23:44.162753 systemd[1]: sshd@15-10.0.0.109:22-10.0.0.1:33874.service: Deactivated successfully. Jan 13 20:23:44.164563 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:23:44.166909 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:23:44.179632 systemd[1]: Started sshd@16-10.0.0.109:22-10.0.0.1:33884.service - OpenSSH per-connection server daemon (10.0.0.1:33884). Jan 13 20:23:44.180586 systemd-logind[1453]: Removed session 16. Jan 13 20:23:44.219304 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 33884 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:23:44.220390 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:23:44.224132 systemd-logind[1453]: New session 17 of user core. Jan 13 20:23:44.239248 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:23:44.433647 sshd[4097]: Connection closed by 10.0.0.1 port 33884 Jan 13 20:23:44.434113 sshd-session[4095]: pam_unix(sshd:session): session closed for user core Jan 13 20:23:44.444481 systemd[1]: sshd@16-10.0.0.109:22-10.0.0.1:33884.service: Deactivated successfully. Jan 13 20:23:44.445966 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:23:44.448447 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:23:44.450289 systemd[1]: Started sshd@17-10.0.0.109:22-10.0.0.1:33890.service - OpenSSH per-connection server daemon (10.0.0.1:33890). Jan 13 20:23:44.451476 systemd-logind[1453]: Removed session 17. Jan 13 20:23:44.498469 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 33890 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:23:44.499510 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:23:44.503455 systemd-logind[1453]: New session 18 of user core. Jan 13 20:23:44.517320 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:23:45.814486 sshd[4110]: Connection closed by 10.0.0.1 port 33890 Jan 13 20:23:45.814820 sshd-session[4108]: pam_unix(sshd:session): session closed for user core Jan 13 20:23:45.821303 systemd[1]: sshd@17-10.0.0.109:22-10.0.0.1:33890.service: Deactivated successfully. Jan 13 20:23:45.824453 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:23:45.826557 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit. 
Jan 13 20:23:45.833564 systemd[1]: Started sshd@18-10.0.0.109:22-10.0.0.1:33892.service - OpenSSH per-connection server daemon (10.0.0.1:33892). Jan 13 20:23:45.836404 systemd-logind[1453]: Removed session 18. Jan 13 20:23:45.881918 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 33892 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:23:45.883751 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:23:45.888137 systemd-logind[1453]: New session 19 of user core. Jan 13 20:23:45.898255 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:23:46.126655 sshd[4135]: Connection closed by 10.0.0.1 port 33892 Jan 13 20:23:46.132362 sshd-session[4133]: pam_unix(sshd:session): session closed for user core Jan 13 20:23:46.137703 systemd[1]: sshd@18-10.0.0.109:22-10.0.0.1:33892.service: Deactivated successfully. Jan 13 20:23:46.143778 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:23:46.148733 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:23:46.163972 systemd[1]: Started sshd@19-10.0.0.109:22-10.0.0.1:33904.service - OpenSSH per-connection server daemon (10.0.0.1:33904). Jan 13 20:23:46.165426 systemd-logind[1453]: Removed session 19. Jan 13 20:23:46.206886 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 33904 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:23:46.208709 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:23:46.212756 systemd-logind[1453]: New session 20 of user core. Jan 13 20:23:46.219219 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:23:46.333008 sshd[4148]: Connection closed by 10.0.0.1 port 33904 Jan 13 20:23:46.332877 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Jan 13 20:23:46.336448 systemd[1]: sshd@19-10.0.0.109:22-10.0.0.1:33904.service: Deactivated successfully. Jan 13 20:23:46.338215 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:23:46.338792 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:23:46.339553 systemd-logind[1453]: Removed session 20. Jan 13 20:23:51.347507 systemd[1]: Started sshd@20-10.0.0.109:22-10.0.0.1:33920.service - OpenSSH per-connection server daemon (10.0.0.1:33920). Jan 13 20:23:51.396522 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 33920 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:23:51.397739 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:23:51.402222 systemd-logind[1453]: New session 21 of user core. Jan 13 20:23:51.409226 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:23:51.524888 sshd[4165]: Connection closed by 10.0.0.1 port 33920 Jan 13 20:23:51.525248 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Jan 13 20:23:51.529341 systemd[1]: sshd@20-10.0.0.109:22-10.0.0.1:33920.service: Deactivated successfully. Jan 13 20:23:51.530880 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:23:51.531779 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:23:51.533790 systemd-logind[1453]: Removed session 21. Jan 13 20:23:56.541530 systemd[1]: Started sshd@21-10.0.0.109:22-10.0.0.1:38950.service - OpenSSH per-connection server daemon (10.0.0.1:38950). 
Jan 13 20:23:56.585561 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 38950 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:23:56.586662 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:23:56.590022 systemd-logind[1453]: New session 22 of user core. Jan 13 20:23:56.601235 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:23:56.707817 sshd[4183]: Connection closed by 10.0.0.1 port 38950 Jan 13 20:23:56.708304 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Jan 13 20:23:56.711250 systemd[1]: sshd@21-10.0.0.109:22-10.0.0.1:38950.service: Deactivated successfully. Jan 13 20:23:56.712996 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:23:56.713706 systemd-logind[1453]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:23:56.714696 systemd-logind[1453]: Removed session 22. Jan 13 20:23:58.381888 kubelet[2548]: E0113 20:23:58.381847 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:24:01.718596 systemd[1]: Started sshd@22-10.0.0.109:22-10.0.0.1:38960.service - OpenSSH per-connection server daemon (10.0.0.1:38960). Jan 13 20:24:01.765019 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 38960 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:24:01.765526 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:01.769513 systemd-logind[1453]: New session 23 of user core. Jan 13 20:24:01.781262 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:24:01.892951 sshd[4198]: Connection closed by 10.0.0.1 port 38960 Jan 13 20:24:01.893292 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:01.905850 systemd[1]: sshd@22-10.0.0.109:22-10.0.0.1:38960.service: Deactivated successfully. Jan 13 20:24:01.907427 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:24:01.908698 systemd-logind[1453]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:24:01.909995 systemd[1]: Started sshd@23-10.0.0.109:22-10.0.0.1:38962.service - OpenSSH per-connection server daemon (10.0.0.1:38962). Jan 13 20:24:01.911850 systemd-logind[1453]: Removed session 23. Jan 13 20:24:01.955745 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 38962 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:24:01.957144 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:01.961206 systemd-logind[1453]: New session 24 of user core. Jan 13 20:24:01.975243 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:24:04.250510 containerd[1472]: time="2025-01-13T20:24:04.250141667Z" level=info msg="StopContainer for \"36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf\" with timeout 30 (s)" Jan 13 20:24:04.250895 containerd[1472]: time="2025-01-13T20:24:04.250601751Z" level=info msg="Stop container \"36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf\" with signal terminated" Jan 13 20:24:04.262924 systemd[1]: cri-containerd-36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf.scope: Deactivated successfully. 
Jan 13 20:24:04.289154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf-rootfs.mount: Deactivated successfully. Jan 13 20:24:04.294344 containerd[1472]: time="2025-01-13T20:24:04.294281345Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:24:04.297931 containerd[1472]: time="2025-01-13T20:24:04.297895094Z" level=info msg="StopContainer for \"4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91\" with timeout 2 (s)" Jan 13 20:24:04.298396 containerd[1472]: time="2025-01-13T20:24:04.298370218Z" level=info msg="Stop container \"4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91\" with signal terminated" Jan 13 20:24:04.300857 containerd[1472]: time="2025-01-13T20:24:04.300673357Z" level=info msg="shim disconnected" id=36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf namespace=k8s.io Jan 13 20:24:04.300857 containerd[1472]: time="2025-01-13T20:24:04.300717037Z" level=warning msg="cleaning up after shim disconnected" id=36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf namespace=k8s.io Jan 13 20:24:04.300857 containerd[1472]: time="2025-01-13T20:24:04.300725197Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:24:04.306244 systemd-networkd[1403]: lxc_health: Link DOWN Jan 13 20:24:04.306254 systemd-networkd[1403]: lxc_health: Lost carrier Jan 13 20:24:04.334925 systemd[1]: cri-containerd-4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91.scope: Deactivated successfully. Jan 13 20:24:04.335405 systemd[1]: cri-containerd-4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91.scope: Consumed 6.480s CPU time. Jan 13 20:24:04.349808 containerd[1472]: time="2025-01-13T20:24:04.349765514Z" level=info msg="StopContainer for \"36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf\" returns successfully" Jan 13 20:24:04.355089 containerd[1472]: time="2025-01-13T20:24:04.352783939Z" level=info msg="StopPodSandbox for \"ca48b6d2bdac02473a1c977a9e95b2ffba5489969e09be7e26903c5ef1a17e8b\"" Jan 13 20:24:04.355289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91-rootfs.mount: Deactivated successfully. Jan 13 20:24:04.358310 containerd[1472]: time="2025-01-13T20:24:04.358259423Z" level=info msg="Container to stop \"36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:24:04.360574 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca48b6d2bdac02473a1c977a9e95b2ffba5489969e09be7e26903c5ef1a17e8b-shm.mount: Deactivated successfully. 
Jan 13 20:24:04.363389 containerd[1472]: time="2025-01-13T20:24:04.363197143Z" level=info msg="shim disconnected" id=4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91 namespace=k8s.io Jan 13 20:24:04.363389 containerd[1472]: time="2025-01-13T20:24:04.363252384Z" level=warning msg="cleaning up after shim disconnected" id=4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91 namespace=k8s.io Jan 13 20:24:04.363389 containerd[1472]: time="2025-01-13T20:24:04.363262064Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:24:04.365174 systemd[1]: cri-containerd-ca48b6d2bdac02473a1c977a9e95b2ffba5489969e09be7e26903c5ef1a17e8b.scope: Deactivated successfully. Jan 13 20:24:04.378939 containerd[1472]: time="2025-01-13T20:24:04.378896111Z" level=info msg="StopContainer for \"4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91\" returns successfully" Jan 13 20:24:04.379612 containerd[1472]: time="2025-01-13T20:24:04.379581796Z" level=info msg="StopPodSandbox for \"e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab\"" Jan 13 20:24:04.379677 containerd[1472]: time="2025-01-13T20:24:04.379623836Z" level=info msg="Container to stop \"9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:24:04.379677 containerd[1472]: time="2025-01-13T20:24:04.379635676Z" level=info msg="Container to stop \"1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:24:04.379677 containerd[1472]: time="2025-01-13T20:24:04.379644557Z" level=info msg="Container to stop \"4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:24:04.379677 containerd[1472]: time="2025-01-13T20:24:04.379653437Z" level=info msg="Container to stop \"5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:24:04.379677 containerd[1472]: time="2025-01-13T20:24:04.379661957Z" level=info msg="Container to stop \"1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:24:04.381294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab-shm.mount: Deactivated successfully. Jan 13 20:24:04.387189 systemd[1]: cri-containerd-e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab.scope: Deactivated successfully. 
Jan 13 20:24:04.401982 containerd[1472]: time="2025-01-13T20:24:04.401792616Z" level=info msg="shim disconnected" id=ca48b6d2bdac02473a1c977a9e95b2ffba5489969e09be7e26903c5ef1a17e8b namespace=k8s.io Jan 13 20:24:04.401982 containerd[1472]: time="2025-01-13T20:24:04.401842776Z" level=warning msg="cleaning up after shim disconnected" id=ca48b6d2bdac02473a1c977a9e95b2ffba5489969e09be7e26903c5ef1a17e8b namespace=k8s.io Jan 13 20:24:04.401982 containerd[1472]: time="2025-01-13T20:24:04.401851497Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:24:04.407912 containerd[1472]: time="2025-01-13T20:24:04.407713944Z" level=info msg="shim disconnected" id=e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab namespace=k8s.io Jan 13 20:24:04.407912 containerd[1472]: time="2025-01-13T20:24:04.407771504Z" level=warning msg="cleaning up after shim disconnected" id=e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab namespace=k8s.io Jan 13 20:24:04.407912 containerd[1472]: time="2025-01-13T20:24:04.407779425Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:24:04.423199 containerd[1472]: time="2025-01-13T20:24:04.423148549Z" level=info msg="TearDown network for sandbox \"ca48b6d2bdac02473a1c977a9e95b2ffba5489969e09be7e26903c5ef1a17e8b\" successfully" Jan 13 20:24:04.423199 containerd[1472]: time="2025-01-13T20:24:04.423186349Z" level=info msg="StopPodSandbox for \"ca48b6d2bdac02473a1c977a9e95b2ffba5489969e09be7e26903c5ef1a17e8b\" returns successfully" Jan 13 20:24:04.424669 containerd[1472]: time="2025-01-13T20:24:04.424506680Z" level=info msg="TearDown network for sandbox \"e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab\" successfully" Jan 13 20:24:04.424669 containerd[1472]: time="2025-01-13T20:24:04.424533160Z" level=info msg="StopPodSandbox for \"e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab\" returns successfully" Jan 13 20:24:04.475762 kubelet[2548]: I0113 20:24:04.475715 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47c5d4fa-0c6d-44dc-af1d-c3953839d618-cilium-config-path\") pod \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " Jan 13 20:24:04.475762 kubelet[2548]: I0113 20:24:04.475766 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-lib-modules\") pod \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " Jan 13 20:24:04.475762 kubelet[2548]: I0113 20:24:04.475787 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87c746d3-fb4c-4b75-921c-6c8140db1ae4-cilium-config-path\") pod \"87c746d3-fb4c-4b75-921c-6c8140db1ae4\" (UID: \"87c746d3-fb4c-4b75-921c-6c8140db1ae4\") " Jan 13 20:24:04.475762 kubelet[2548]: I0113 20:24:04.475803 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-host-proc-sys-net\") pod \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " Jan 13 20:24:04.475762 kubelet[2548]: I0113 20:24:04.475821 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhsf9\" (UniqueName: 
\"kubernetes.io/projected/87c746d3-fb4c-4b75-921c-6c8140db1ae4-kube-api-access-rhsf9\") pod \"87c746d3-fb4c-4b75-921c-6c8140db1ae4\" (UID: \"87c746d3-fb4c-4b75-921c-6c8140db1ae4\") " Jan 13 20:24:04.475762 kubelet[2548]: I0113 20:24:04.475835 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-cilium-cgroup\") pod \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " Jan 13 20:24:04.476564 kubelet[2548]: I0113 20:24:04.475850 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-host-proc-sys-kernel\") pod \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " Jan 13 20:24:04.476564 kubelet[2548]: I0113 20:24:04.475865 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-xtables-lock\") pod \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " Jan 13 20:24:04.476564 kubelet[2548]: I0113 20:24:04.475887 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47c5d4fa-0c6d-44dc-af1d-c3953839d618-clustermesh-secrets\") pod \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " Jan 13 20:24:04.476564 kubelet[2548]: I0113 20:24:04.475900 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-hostproc\") pod \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " Jan 13 20:24:04.476564 kubelet[2548]: I0113 20:24:04.475917 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47c5d4fa-0c6d-44dc-af1d-c3953839d618-hubble-tls\") pod \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " Jan 13 20:24:04.476564 kubelet[2548]: I0113 20:24:04.475930 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-bpf-maps\") pod \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " Jan 13 20:24:04.476695 kubelet[2548]: I0113 20:24:04.475948 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-etc-cni-netd\") pod \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " Jan 13 20:24:04.476695 kubelet[2548]: I0113 20:24:04.475961 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-cni-path\") pod \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " Jan 13 20:24:04.476695 kubelet[2548]: I0113 20:24:04.475975 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-cilium-run\") 
pod \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " Jan 13 20:24:04.476695 kubelet[2548]: I0113 20:24:04.475992 2548 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7hlr\" (UniqueName: \"kubernetes.io/projected/47c5d4fa-0c6d-44dc-af1d-c3953839d618-kube-api-access-m7hlr\") pod \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\" (UID: \"47c5d4fa-0c6d-44dc-af1d-c3953839d618\") " Jan 13 20:24:04.481219 kubelet[2548]: I0113 20:24:04.480769 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "47c5d4fa-0c6d-44dc-af1d-c3953839d618" (UID: "47c5d4fa-0c6d-44dc-af1d-c3953839d618"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:04.481527 kubelet[2548]: I0113 20:24:04.481498 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "47c5d4fa-0c6d-44dc-af1d-c3953839d618" (UID: "47c5d4fa-0c6d-44dc-af1d-c3953839d618"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:04.481741 kubelet[2548]: I0113 20:24:04.481687 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "47c5d4fa-0c6d-44dc-af1d-c3953839d618" (UID: "47c5d4fa-0c6d-44dc-af1d-c3953839d618"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:04.481741 kubelet[2548]: I0113 20:24:04.481714 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "47c5d4fa-0c6d-44dc-af1d-c3953839d618" (UID: "47c5d4fa-0c6d-44dc-af1d-c3953839d618"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:04.481924 kubelet[2548]: I0113 20:24:04.481857 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "47c5d4fa-0c6d-44dc-af1d-c3953839d618" (UID: "47c5d4fa-0c6d-44dc-af1d-c3953839d618"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:04.481924 kubelet[2548]: I0113 20:24:04.481887 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-hostproc" (OuterVolumeSpecName: "hostproc") pod "47c5d4fa-0c6d-44dc-af1d-c3953839d618" (UID: "47c5d4fa-0c6d-44dc-af1d-c3953839d618"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:04.483465 kubelet[2548]: I0113 20:24:04.482138 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "47c5d4fa-0c6d-44dc-af1d-c3953839d618" (UID: "47c5d4fa-0c6d-44dc-af1d-c3953839d618"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:04.483465 kubelet[2548]: I0113 20:24:04.482171 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-cni-path" (OuterVolumeSpecName: "cni-path") pod "47c5d4fa-0c6d-44dc-af1d-c3953839d618" (UID: "47c5d4fa-0c6d-44dc-af1d-c3953839d618"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:04.483465 kubelet[2548]: I0113 20:24:04.482192 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "47c5d4fa-0c6d-44dc-af1d-c3953839d618" (UID: "47c5d4fa-0c6d-44dc-af1d-c3953839d618"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:04.483465 kubelet[2548]: I0113 20:24:04.482459 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47c5d4fa-0c6d-44dc-af1d-c3953839d618-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "47c5d4fa-0c6d-44dc-af1d-c3953839d618" (UID: "47c5d4fa-0c6d-44dc-af1d-c3953839d618"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:24:04.483465 kubelet[2548]: I0113 20:24:04.482490 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "47c5d4fa-0c6d-44dc-af1d-c3953839d618" (UID: "47c5d4fa-0c6d-44dc-af1d-c3953839d618"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:04.483645 kubelet[2548]: I0113 20:24:04.483337 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47c5d4fa-0c6d-44dc-af1d-c3953839d618-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "47c5d4fa-0c6d-44dc-af1d-c3953839d618" (UID: "47c5d4fa-0c6d-44dc-af1d-c3953839d618"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:24:04.483645 kubelet[2548]: I0113 20:24:04.483416 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c5d4fa-0c6d-44dc-af1d-c3953839d618-kube-api-access-m7hlr" (OuterVolumeSpecName: "kube-api-access-m7hlr") pod "47c5d4fa-0c6d-44dc-af1d-c3953839d618" (UID: "47c5d4fa-0c6d-44dc-af1d-c3953839d618"). InnerVolumeSpecName "kube-api-access-m7hlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:24:04.483819 kubelet[2548]: I0113 20:24:04.483790 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87c746d3-fb4c-4b75-921c-6c8140db1ae4-kube-api-access-rhsf9" (OuterVolumeSpecName: "kube-api-access-rhsf9") pod "87c746d3-fb4c-4b75-921c-6c8140db1ae4" (UID: "87c746d3-fb4c-4b75-921c-6c8140db1ae4"). InnerVolumeSpecName "kube-api-access-rhsf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:24:04.483967 kubelet[2548]: I0113 20:24:04.483941 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87c746d3-fb4c-4b75-921c-6c8140db1ae4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "87c746d3-fb4c-4b75-921c-6c8140db1ae4" (UID: "87c746d3-fb4c-4b75-921c-6c8140db1ae4"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:24:04.484762 kubelet[2548]: I0113 20:24:04.484727 2548 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c5d4fa-0c6d-44dc-af1d-c3953839d618-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "47c5d4fa-0c6d-44dc-af1d-c3953839d618" (UID: "47c5d4fa-0c6d-44dc-af1d-c3953839d618"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:24:04.576180 kubelet[2548]: I0113 20:24:04.576142 2548 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/47c5d4fa-0c6d-44dc-af1d-c3953839d618-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 13 20:24:04.576180 kubelet[2548]: I0113 20:24:04.576174 2548 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 13 20:24:04.576180 kubelet[2548]: I0113 20:24:04.576186 2548 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 13 20:24:04.576180 kubelet[2548]: I0113 20:24:04.576194 2548 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:24:04.576392 kubelet[2548]: I0113 20:24:04.576202 2548 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 13 20:24:04.576392 kubelet[2548]: I0113 20:24:04.576210 2548 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-m7hlr\" (UniqueName: \"kubernetes.io/projected/47c5d4fa-0c6d-44dc-af1d-c3953839d618-kube-api-access-m7hlr\") on node \"localhost\" DevicePath \"\"" Jan 13 20:24:04.576392 kubelet[2548]: I0113 20:24:04.576221 2548 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 13 20:24:04.576392 kubelet[2548]: I0113 20:24:04.576228 2548 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 13 20:24:04.576392 kubelet[2548]: I0113 20:24:04.576235 2548 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87c746d3-fb4c-4b75-921c-6c8140db1ae4-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:24:04.576392 kubelet[2548]: I0113 20:24:04.576244 2548 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47c5d4fa-0c6d-44dc-af1d-c3953839d618-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:24:04.576392 kubelet[2548]: I0113 20:24:04.576252 2548 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rhsf9\" (UniqueName: \"kubernetes.io/projected/87c746d3-fb4c-4b75-921c-6c8140db1ae4-kube-api-access-rhsf9\") on node \"localhost\" DevicePath \"\"" Jan 13 20:24:04.576392 kubelet[2548]: I0113 20:24:04.576260 2548 
reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 13 20:24:04.576582 kubelet[2548]: I0113 20:24:04.576268 2548 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 13 20:24:04.576582 kubelet[2548]: I0113 20:24:04.576277 2548 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 13 20:24:04.576582 kubelet[2548]: I0113 20:24:04.576285 2548 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47c5d4fa-0c6d-44dc-af1d-c3953839d618-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 13 20:24:04.576582 kubelet[2548]: I0113 20:24:04.576294 2548 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/47c5d4fa-0c6d-44dc-af1d-c3953839d618-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 13 20:24:04.586123 systemd[1]: Removed slice kubepods-besteffort-pod87c746d3_fb4c_4b75_921c_6c8140db1ae4.slice - libcontainer container kubepods-besteffort-pod87c746d3_fb4c_4b75_921c_6c8140db1ae4.slice. Jan 13 20:24:04.586669 kubelet[2548]: I0113 20:24:04.586178 2548 scope.go:117] "RemoveContainer" containerID="36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf" Jan 13 20:24:04.588030 systemd[1]: Removed slice kubepods-burstable-pod47c5d4fa_0c6d_44dc_af1d_c3953839d618.slice - libcontainer container kubepods-burstable-pod47c5d4fa_0c6d_44dc_af1d_c3953839d618.slice. Jan 13 20:24:04.588328 systemd[1]: kubepods-burstable-pod47c5d4fa_0c6d_44dc_af1d_c3953839d618.slice: Consumed 6.605s CPU time. 
Jan 13 20:24:04.588855 containerd[1472]: time="2025-01-13T20:24:04.588823731Z" level=info msg="RemoveContainer for \"36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf\"" Jan 13 20:24:04.593384 containerd[1472]: time="2025-01-13T20:24:04.593187247Z" level=info msg="RemoveContainer for \"36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf\" returns successfully" Jan 13 20:24:04.593545 kubelet[2548]: I0113 20:24:04.593435 2548 scope.go:117] "RemoveContainer" containerID="36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf" Jan 13 20:24:04.593957 containerd[1472]: time="2025-01-13T20:24:04.593901453Z" level=error msg="ContainerStatus for \"36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf\": not found" Jan 13 20:24:04.597190 kubelet[2548]: E0113 20:24:04.597145 2548 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf\": not found" containerID="36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf" Jan 13 20:24:04.597292 kubelet[2548]: I0113 20:24:04.597199 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf"} err="failed to get container status \"36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf\": rpc error: code = NotFound desc = an error occurred when try to find container \"36a02a7e42da97994ff91d2b97a3f16be89d49ab0a54c80e7af17c7ac3eebdbf\": not found" Jan 13 20:24:04.597320 kubelet[2548]: I0113 20:24:04.597294 2548 scope.go:117] "RemoveContainer" containerID="4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91" Jan 13 20:24:04.599551 containerd[1472]: time="2025-01-13T20:24:04.599516778Z" level=info msg="RemoveContainer for \"4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91\"" Jan 13 20:24:04.603533 containerd[1472]: time="2025-01-13T20:24:04.603495570Z" level=info msg="RemoveContainer for \"4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91\" returns successfully" Jan 13 20:24:04.603759 kubelet[2548]: I0113 20:24:04.603689 2548 scope.go:117] "RemoveContainer" containerID="1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f" Jan 13 20:24:04.604698 containerd[1472]: time="2025-01-13T20:24:04.604663100Z" level=info msg="RemoveContainer for \"1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f\"" Jan 13 20:24:04.609425 containerd[1472]: time="2025-01-13T20:24:04.609385938Z" level=info msg="RemoveContainer for \"1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f\" returns successfully" Jan 13 20:24:04.609724 kubelet[2548]: I0113 20:24:04.609577 2548 scope.go:117] "RemoveContainer" containerID="1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567" Jan 13 20:24:04.613193 containerd[1472]: time="2025-01-13T20:24:04.613042848Z" level=info msg="RemoveContainer for \"1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567\"" Jan 13 20:24:04.615560 containerd[1472]: time="2025-01-13T20:24:04.615466227Z" level=info msg="RemoveContainer for \"1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567\" returns successfully" Jan 13 20:24:04.615664 kubelet[2548]: I0113 20:24:04.615639 
2548 scope.go:117] "RemoveContainer" containerID="9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b" Jan 13 20:24:04.616874 containerd[1472]: time="2025-01-13T20:24:04.616637877Z" level=info msg="RemoveContainer for \"9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b\"" Jan 13 20:24:04.618652 containerd[1472]: time="2025-01-13T20:24:04.618620973Z" level=info msg="RemoveContainer for \"9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b\" returns successfully" Jan 13 20:24:04.618924 kubelet[2548]: I0113 20:24:04.618899 2548 scope.go:117] "RemoveContainer" containerID="5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7" Jan 13 20:24:04.620024 containerd[1472]: time="2025-01-13T20:24:04.619996104Z" level=info msg="RemoveContainer for \"5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7\"" Jan 13 20:24:04.621975 containerd[1472]: time="2025-01-13T20:24:04.621942600Z" level=info msg="RemoveContainer for \"5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7\" returns successfully" Jan 13 20:24:04.622167 kubelet[2548]: I0113 20:24:04.622148 2548 scope.go:117] "RemoveContainer" containerID="4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91" Jan 13 20:24:04.622412 containerd[1472]: time="2025-01-13T20:24:04.622363723Z" level=error msg="ContainerStatus for \"4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91\": not found" Jan 13 20:24:04.622532 kubelet[2548]: E0113 20:24:04.622502 2548 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91\": not found" containerID="4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91" Jan 13 20:24:04.622572 kubelet[2548]: I0113 20:24:04.622540 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91"} err="failed to get container status \"4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e97251ee329e68a4434274d5a4cbdefb9d8a64f4e4da28f70175391d0015f91\": not found" Jan 13 20:24:04.622572 kubelet[2548]: I0113 20:24:04.622563 2548 scope.go:117] "RemoveContainer" containerID="1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f" Jan 13 20:24:04.622774 containerd[1472]: time="2025-01-13T20:24:04.622744726Z" level=error msg="ContainerStatus for \"1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f\": not found" Jan 13 20:24:04.622871 kubelet[2548]: E0113 20:24:04.622852 2548 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f\": not found" containerID="1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f" Jan 13 20:24:04.622904 kubelet[2548]: I0113 20:24:04.622874 2548 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f"} err="failed to get container status \"1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f\": rpc error: code = NotFound desc = an error occurred when try to find container \"1edadb55d84329d5161887f997befe267c0ad40965bb747259c9e7511c44397f\": not found" Jan 13 20:24:04.622904 kubelet[2548]: I0113 20:24:04.622888 2548 scope.go:117] "RemoveContainer" containerID="1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567" Jan 13 20:24:04.623112 containerd[1472]: time="2025-01-13T20:24:04.623085409Z" level=error msg="ContainerStatus for \"1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567\": not found" Jan 13 20:24:04.623257 kubelet[2548]: E0113 20:24:04.623232 2548 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567\": not found" containerID="1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567" Jan 13 20:24:04.623332 kubelet[2548]: I0113 20:24:04.623295 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567"} err="failed to get container status \"1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567\": rpc error: code = NotFound desc = an error occurred when try to find container \"1250851cac6d02f8da830e782d64d7e89ffb5cf3fb7c802c6f1bdfee300d7567\": not found" Jan 13 20:24:04.623363 kubelet[2548]: I0113 20:24:04.623331 2548 scope.go:117] "RemoveContainer" containerID="9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b" Jan 13 20:24:04.623528 containerd[1472]: time="2025-01-13T20:24:04.623502412Z" level=error msg="ContainerStatus for \"9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b\": not found" Jan 13 20:24:04.623652 kubelet[2548]: E0113 20:24:04.623634 2548 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b\": not found" containerID="9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b" Jan 13 20:24:04.623686 kubelet[2548]: I0113 20:24:04.623656 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b"} err="failed to get container status \"9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f0a565d7c5f16b0d0d00ef454018479b920d6c00aed3d578061855c4423253b\": not found" Jan 13 20:24:04.623686 kubelet[2548]: I0113 20:24:04.623669 2548 scope.go:117] "RemoveContainer" containerID="5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7" Jan 13 20:24:04.623865 containerd[1472]: time="2025-01-13T20:24:04.623838815Z" level=error msg="ContainerStatus for \"5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7\": not found" Jan 13 20:24:04.623985 kubelet[2548]: E0113 20:24:04.623964 2548 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7\": not found" containerID="5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7" Jan 13 20:24:04.624016 kubelet[2548]: I0113 20:24:04.623991 2548 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7"} err="failed to get container status \"5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d0e48885a938edeb73ac12a41efbca39ab076391a998c61c4a44347c184dca7\": not found" Jan 13 20:24:05.274759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca48b6d2bdac02473a1c977a9e95b2ffba5489969e09be7e26903c5ef1a17e8b-rootfs.mount: Deactivated successfully. Jan 13 20:24:05.274864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e08fe7161f701b059cc596d65f2e57bcdb7895ad771415fa3f06f0951327d2ab-rootfs.mount: Deactivated successfully. Jan 13 20:24:05.274915 systemd[1]: var-lib-kubelet-pods-87c746d3\x2dfb4c\x2d4b75\x2d921c\x2d6c8140db1ae4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drhsf9.mount: Deactivated successfully. Jan 13 20:24:05.274968 systemd[1]: var-lib-kubelet-pods-47c5d4fa\x2d0c6d\x2d44dc\x2daf1d\x2dc3953839d618-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm7hlr.mount: Deactivated successfully. Jan 13 20:24:05.275018 systemd[1]: var-lib-kubelet-pods-47c5d4fa\x2d0c6d\x2d44dc\x2daf1d\x2dc3953839d618-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 20:24:05.275104 systemd[1]: var-lib-kubelet-pods-47c5d4fa\x2d0c6d\x2d44dc\x2daf1d\x2dc3953839d618-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:24:05.387694 kubelet[2548]: I0113 20:24:05.387626 2548 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47c5d4fa-0c6d-44dc-af1d-c3953839d618" path="/var/lib/kubelet/pods/47c5d4fa-0c6d-44dc-af1d-c3953839d618/volumes" Jan 13 20:24:05.388303 kubelet[2548]: I0113 20:24:05.388273 2548 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87c746d3-fb4c-4b75-921c-6c8140db1ae4" path="/var/lib/kubelet/pods/87c746d3-fb4c-4b75-921c-6c8140db1ae4/volumes" Jan 13 20:24:05.426328 kubelet[2548]: E0113 20:24:05.426272 2548 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:24:06.216439 sshd[4213]: Connection closed by 10.0.0.1 port 38962 Jan 13 20:24:06.216983 sshd-session[4211]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:06.228078 systemd[1]: sshd@23-10.0.0.109:22-10.0.0.1:38962.service: Deactivated successfully. Jan 13 20:24:06.229912 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:24:06.230113 systemd[1]: session-24.scope: Consumed 1.627s CPU time. Jan 13 20:24:06.231774 systemd-logind[1453]: Session 24 logged out. Waiting for processes to exit. 
Jan 13 20:24:06.238420 systemd[1]: Started sshd@24-10.0.0.109:22-10.0.0.1:44418.service - OpenSSH per-connection server daemon (10.0.0.1:44418). Jan 13 20:24:06.239272 systemd-logind[1453]: Removed session 24. Jan 13 20:24:06.281856 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 44418 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:24:06.283340 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:06.288532 systemd-logind[1453]: New session 25 of user core. Jan 13 20:24:06.295244 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 20:24:06.382214 kubelet[2548]: E0113 20:24:06.382173 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:24:07.064764 sshd[4377]: Connection closed by 10.0.0.1 port 44418 Jan 13 20:24:07.065301 sshd-session[4375]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:07.078039 systemd[1]: sshd@24-10.0.0.109:22-10.0.0.1:44418.service: Deactivated successfully. Jan 13 20:24:07.081651 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 20:24:07.087369 systemd-logind[1453]: Session 25 logged out. Waiting for processes to exit. Jan 13 20:24:07.091746 kubelet[2548]: E0113 20:24:07.091677 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47c5d4fa-0c6d-44dc-af1d-c3953839d618" containerName="mount-cgroup" Jan 13 20:24:07.091746 kubelet[2548]: E0113 20:24:07.091709 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47c5d4fa-0c6d-44dc-af1d-c3953839d618" containerName="mount-bpf-fs" Jan 13 20:24:07.091746 kubelet[2548]: E0113 20:24:07.091716 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47c5d4fa-0c6d-44dc-af1d-c3953839d618" containerName="clean-cilium-state" Jan 13 20:24:07.091746 kubelet[2548]: E0113 20:24:07.091722 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47c5d4fa-0c6d-44dc-af1d-c3953839d618" containerName="cilium-agent" Jan 13 20:24:07.091746 kubelet[2548]: E0113 20:24:07.091727 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87c746d3-fb4c-4b75-921c-6c8140db1ae4" containerName="cilium-operator" Jan 13 20:24:07.091746 kubelet[2548]: E0113 20:24:07.091735 2548 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="47c5d4fa-0c6d-44dc-af1d-c3953839d618" containerName="apply-sysctl-overwrites" Jan 13 20:24:07.091746 kubelet[2548]: I0113 20:24:07.091761 2548 memory_manager.go:354] "RemoveStaleState removing state" podUID="87c746d3-fb4c-4b75-921c-6c8140db1ae4" containerName="cilium-operator" Jan 13 20:24:07.091970 kubelet[2548]: I0113 20:24:07.091768 2548 memory_manager.go:354] "RemoveStaleState removing state" podUID="47c5d4fa-0c6d-44dc-af1d-c3953839d618" containerName="cilium-agent" Jan 13 20:24:07.096415 systemd[1]: Started sshd@25-10.0.0.109:22-10.0.0.1:44432.service - OpenSSH per-connection server daemon (10.0.0.1:44432). 
Jan 13 20:24:07.098142 kubelet[2548]: W0113 20:24:07.098103 2548 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 13 20:24:07.098223 kubelet[2548]: E0113 20:24:07.098154 2548 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 13 20:24:07.098223 kubelet[2548]: W0113 20:24:07.098179 2548 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 13 20:24:07.098587 kubelet[2548]: E0113 20:24:07.098328 2548 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 13 20:24:07.098685 systemd-logind[1453]: Removed session 25. Jan 13 20:24:07.105734 systemd[1]: Created slice kubepods-burstable-pod3c6e9b87_393d_42ef_9e59_9275ec447d51.slice - libcontainer container kubepods-burstable-pod3c6e9b87_393d_42ef_9e59_9275ec447d51.slice. Jan 13 20:24:07.145482 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 44432 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:24:07.146765 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:07.150237 systemd-logind[1453]: New session 26 of user core. Jan 13 20:24:07.165213 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 13 20:24:07.189936 kubelet[2548]: I0113 20:24:07.189897 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c6e9b87-393d-42ef-9e59-9275ec447d51-cilium-run\") pod \"cilium-hd7kb\" (UID: \"3c6e9b87-393d-42ef-9e59-9275ec447d51\") " pod="kube-system/cilium-hd7kb" Jan 13 20:24:07.189936 kubelet[2548]: I0113 20:24:07.189937 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c6e9b87-393d-42ef-9e59-9275ec447d51-clustermesh-secrets\") pod \"cilium-hd7kb\" (UID: \"3c6e9b87-393d-42ef-9e59-9275ec447d51\") " pod="kube-system/cilium-hd7kb" Jan 13 20:24:07.190078 kubelet[2548]: I0113 20:24:07.189958 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c6e9b87-393d-42ef-9e59-9275ec447d51-cni-path\") pod \"cilium-hd7kb\" (UID: \"3c6e9b87-393d-42ef-9e59-9275ec447d51\") " pod="kube-system/cilium-hd7kb" Jan 13 20:24:07.190078 kubelet[2548]: I0113 20:24:07.189972 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c6e9b87-393d-42ef-9e59-9275ec447d51-xtables-lock\") pod \"cilium-hd7kb\" (UID: \"3c6e9b87-393d-42ef-9e59-9275ec447d51\") " pod="kube-system/cilium-hd7kb" Jan 13 20:24:07.190078 kubelet[2548]: I0113 20:24:07.189987 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c6e9b87-393d-42ef-9e59-9275ec447d51-cilium-ipsec-secrets\") pod \"cilium-hd7kb\" (UID: \"3c6e9b87-393d-42ef-9e59-9275ec447d51\") " pod="kube-system/cilium-hd7kb" Jan 13 20:24:07.190078 kubelet[2548]: I0113 20:24:07.190001 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c6e9b87-393d-42ef-9e59-9275ec447d51-host-proc-sys-net\") pod \"cilium-hd7kb\" (UID: \"3c6e9b87-393d-42ef-9e59-9275ec447d51\") " pod="kube-system/cilium-hd7kb" Jan 13 20:24:07.190078 kubelet[2548]: I0113 20:24:07.190016 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c6e9b87-393d-42ef-9e59-9275ec447d51-host-proc-sys-kernel\") pod \"cilium-hd7kb\" (UID: \"3c6e9b87-393d-42ef-9e59-9275ec447d51\") " pod="kube-system/cilium-hd7kb" Jan 13 20:24:07.190078 kubelet[2548]: I0113 20:24:07.190029 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c6e9b87-393d-42ef-9e59-9275ec447d51-hubble-tls\") pod \"cilium-hd7kb\" (UID: \"3c6e9b87-393d-42ef-9e59-9275ec447d51\") " pod="kube-system/cilium-hd7kb" Jan 13 20:24:07.190206 kubelet[2548]: I0113 20:24:07.190045 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c6e9b87-393d-42ef-9e59-9275ec447d51-cilium-cgroup\") pod \"cilium-hd7kb\" (UID: \"3c6e9b87-393d-42ef-9e59-9275ec447d51\") " pod="kube-system/cilium-hd7kb" Jan 13 20:24:07.190206 kubelet[2548]: I0113 20:24:07.190074 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/3c6e9b87-393d-42ef-9e59-9275ec447d51-etc-cni-netd\") pod \"cilium-hd7kb\" (UID: \"3c6e9b87-393d-42ef-9e59-9275ec447d51\") " pod="kube-system/cilium-hd7kb" Jan 13 20:24:07.190206 kubelet[2548]: I0113 20:24:07.190091 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x226r\" (UniqueName: \"kubernetes.io/projected/3c6e9b87-393d-42ef-9e59-9275ec447d51-kube-api-access-x226r\") pod \"cilium-hd7kb\" (UID: \"3c6e9b87-393d-42ef-9e59-9275ec447d51\") " pod="kube-system/cilium-hd7kb" Jan 13 20:24:07.190206 kubelet[2548]: I0113 20:24:07.190109 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c6e9b87-393d-42ef-9e59-9275ec447d51-hostproc\") pod \"cilium-hd7kb\" (UID: \"3c6e9b87-393d-42ef-9e59-9275ec447d51\") " pod="kube-system/cilium-hd7kb" Jan 13 20:24:07.190206 kubelet[2548]: I0113 20:24:07.190125 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c6e9b87-393d-42ef-9e59-9275ec447d51-lib-modules\") pod \"cilium-hd7kb\" (UID: \"3c6e9b87-393d-42ef-9e59-9275ec447d51\") " pod="kube-system/cilium-hd7kb" Jan 13 20:24:07.190206 kubelet[2548]: I0113 20:24:07.190141 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c6e9b87-393d-42ef-9e59-9275ec447d51-cilium-config-path\") pod \"cilium-hd7kb\" (UID: \"3c6e9b87-393d-42ef-9e59-9275ec447d51\") " pod="kube-system/cilium-hd7kb" Jan 13 20:24:07.190326 kubelet[2548]: I0113 20:24:07.190155 2548 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c6e9b87-393d-42ef-9e59-9275ec447d51-bpf-maps\") pod \"cilium-hd7kb\" (UID: \"3c6e9b87-393d-42ef-9e59-9275ec447d51\") " pod="kube-system/cilium-hd7kb" Jan 13 20:24:07.216468 sshd[4393]: Connection closed by 10.0.0.1 port 44432 Jan 13 20:24:07.217020 sshd-session[4391]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:07.232495 systemd[1]: sshd@25-10.0.0.109:22-10.0.0.1:44432.service: Deactivated successfully. Jan 13 20:24:07.233965 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 20:24:07.235808 systemd-logind[1453]: Session 26 logged out. Waiting for processes to exit. Jan 13 20:24:07.249672 systemd[1]: Started sshd@26-10.0.0.109:22-10.0.0.1:44440.service - OpenSSH per-connection server daemon (10.0.0.1:44440). Jan 13 20:24:07.250829 systemd-logind[1453]: Removed session 26. Jan 13 20:24:07.289442 sshd[4399]: Accepted publickey for core from 10.0.0.1 port 44440 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:24:07.290627 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:07.301029 systemd-logind[1453]: New session 27 of user core. Jan 13 20:24:07.307227 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 13 20:24:07.341401 kubelet[2548]: I0113 20:24:07.341261 2548 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:24:07Z","lastTransitionTime":"2025-01-13T20:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 13 20:24:08.291625 kubelet[2548]: E0113 20:24:08.291569 2548 secret.go:188] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jan 13 20:24:08.291973 kubelet[2548]: E0113 20:24:08.291667 2548 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c6e9b87-393d-42ef-9e59-9275ec447d51-cilium-ipsec-secrets podName:3c6e9b87-393d-42ef-9e59-9275ec447d51 nodeName:}" failed. No retries permitted until 2025-01-13 20:24:08.791646492 +0000 UTC m=+83.494383298 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/3c6e9b87-393d-42ef-9e59-9275ec447d51-cilium-ipsec-secrets") pod "cilium-hd7kb" (UID: "3c6e9b87-393d-42ef-9e59-9275ec447d51") : failed to sync secret cache: timed out waiting for the condition Jan 13 20:24:08.911560 kubelet[2548]: E0113 20:24:08.911512 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:24:08.912232 containerd[1472]: time="2025-01-13T20:24:08.912186579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hd7kb,Uid:3c6e9b87-393d-42ef-9e59-9275ec447d51,Namespace:kube-system,Attempt:0,}" Jan 13 20:24:08.947029 containerd[1472]: time="2025-01-13T20:24:08.946924159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:24:08.947029 containerd[1472]: time="2025-01-13T20:24:08.946987640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:24:08.947029 containerd[1472]: time="2025-01-13T20:24:08.947003720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:08.947244 containerd[1472]: time="2025-01-13T20:24:08.947108480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:08.969279 systemd[1]: Started cri-containerd-56ac89ec4703939311bbc9ae1f94eab89354441e5aee9bcd6a2cc76383205c6c.scope - libcontainer container 56ac89ec4703939311bbc9ae1f94eab89354441e5aee9bcd6a2cc76383205c6c. 
Jan 13 20:24:09.001198 containerd[1472]: time="2025-01-13T20:24:09.001159885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hd7kb,Uid:3c6e9b87-393d-42ef-9e59-9275ec447d51,Namespace:kube-system,Attempt:0,} returns sandbox id \"56ac89ec4703939311bbc9ae1f94eab89354441e5aee9bcd6a2cc76383205c6c\"" Jan 13 20:24:09.002035 kubelet[2548]: E0113 20:24:09.002014 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:24:09.006001 containerd[1472]: time="2025-01-13T20:24:09.005967201Z" level=info msg="CreateContainer within sandbox \"56ac89ec4703939311bbc9ae1f94eab89354441e5aee9bcd6a2cc76383205c6c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:24:09.021374 containerd[1472]: time="2025-01-13T20:24:09.021320473Z" level=info msg="CreateContainer within sandbox \"56ac89ec4703939311bbc9ae1f94eab89354441e5aee9bcd6a2cc76383205c6c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f2953633fae89d1bfb51470fd3aa4120f1f8085f8a4f1af76add51f065c9d613\"" Jan 13 20:24:09.024968 containerd[1472]: time="2025-01-13T20:24:09.024908700Z" level=info msg="StartContainer for \"f2953633fae89d1bfb51470fd3aa4120f1f8085f8a4f1af76add51f065c9d613\"" Jan 13 20:24:09.047252 systemd[1]: Started cri-containerd-f2953633fae89d1bfb51470fd3aa4120f1f8085f8a4f1af76add51f065c9d613.scope - libcontainer container f2953633fae89d1bfb51470fd3aa4120f1f8085f8a4f1af76add51f065c9d613. Jan 13 20:24:09.076415 containerd[1472]: time="2025-01-13T20:24:09.076289757Z" level=info msg="StartContainer for \"f2953633fae89d1bfb51470fd3aa4120f1f8085f8a4f1af76add51f065c9d613\" returns successfully" Jan 13 20:24:09.094556 systemd[1]: cri-containerd-f2953633fae89d1bfb51470fd3aa4120f1f8085f8a4f1af76add51f065c9d613.scope: Deactivated successfully. 
Jan 13 20:24:09.124167 containerd[1472]: time="2025-01-13T20:24:09.124108189Z" level=info msg="shim disconnected" id=f2953633fae89d1bfb51470fd3aa4120f1f8085f8a4f1af76add51f065c9d613 namespace=k8s.io Jan 13 20:24:09.124515 containerd[1472]: time="2025-01-13T20:24:09.124419351Z" level=warning msg="cleaning up after shim disconnected" id=f2953633fae89d1bfb51470fd3aa4120f1f8085f8a4f1af76add51f065c9d613 namespace=k8s.io Jan 13 20:24:09.124515 containerd[1472]: time="2025-01-13T20:24:09.124436271Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:24:09.596779 kubelet[2548]: E0113 20:24:09.596746 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:24:09.598483 containerd[1472]: time="2025-01-13T20:24:09.598435633Z" level=info msg="CreateContainer within sandbox \"56ac89ec4703939311bbc9ae1f94eab89354441e5aee9bcd6a2cc76383205c6c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:24:09.610929 containerd[1472]: time="2025-01-13T20:24:09.610812564Z" level=info msg="CreateContainer within sandbox \"56ac89ec4703939311bbc9ae1f94eab89354441e5aee9bcd6a2cc76383205c6c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9e6cef82ca17abea35e8f4ce9313046c5acb6568cc5abdd4487cf170d760c2b0\"" Jan 13 20:24:09.613152 containerd[1472]: time="2025-01-13T20:24:09.613095701Z" level=info msg="StartContainer for \"9e6cef82ca17abea35e8f4ce9313046c5acb6568cc5abdd4487cf170d760c2b0\"" Jan 13 20:24:09.661302 systemd[1]: Started cri-containerd-9e6cef82ca17abea35e8f4ce9313046c5acb6568cc5abdd4487cf170d760c2b0.scope - libcontainer container 9e6cef82ca17abea35e8f4ce9313046c5acb6568cc5abdd4487cf170d760c2b0. Jan 13 20:24:09.681516 containerd[1472]: time="2025-01-13T20:24:09.681460043Z" level=info msg="StartContainer for \"9e6cef82ca17abea35e8f4ce9313046c5acb6568cc5abdd4487cf170d760c2b0\" returns successfully" Jan 13 20:24:09.690951 systemd[1]: cri-containerd-9e6cef82ca17abea35e8f4ce9313046c5acb6568cc5abdd4487cf170d760c2b0.scope: Deactivated successfully. Jan 13 20:24:09.734806 containerd[1472]: time="2025-01-13T20:24:09.734730195Z" level=info msg="shim disconnected" id=9e6cef82ca17abea35e8f4ce9313046c5acb6568cc5abdd4487cf170d760c2b0 namespace=k8s.io Jan 13 20:24:09.734806 containerd[1472]: time="2025-01-13T20:24:09.734789635Z" level=warning msg="cleaning up after shim disconnected" id=9e6cef82ca17abea35e8f4ce9313046c5acb6568cc5abdd4487cf170d760c2b0 namespace=k8s.io Jan 13 20:24:09.734806 containerd[1472]: time="2025-01-13T20:24:09.734799915Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:24:09.805986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2962268527.mount: Deactivated successfully. 
Jan 13 20:24:10.382649 kubelet[2548]: E0113 20:24:10.382609 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:24:10.427353 kubelet[2548]: E0113 20:24:10.427303 2548 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:24:10.603096 kubelet[2548]: E0113 20:24:10.600710 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:24:10.610087 containerd[1472]: time="2025-01-13T20:24:10.610041782Z" level=info msg="CreateContainer within sandbox \"56ac89ec4703939311bbc9ae1f94eab89354441e5aee9bcd6a2cc76383205c6c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:24:10.625695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount56006417.mount: Deactivated successfully. Jan 13 20:24:10.627564 containerd[1472]: time="2025-01-13T20:24:10.627525588Z" level=info msg="CreateContainer within sandbox \"56ac89ec4703939311bbc9ae1f94eab89354441e5aee9bcd6a2cc76383205c6c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"33f2d21043d4a829a2f6b78e935acda542405001f27ae798c75608dad18a04db\"" Jan 13 20:24:10.628190 containerd[1472]: time="2025-01-13T20:24:10.628149553Z" level=info msg="StartContainer for \"33f2d21043d4a829a2f6b78e935acda542405001f27ae798c75608dad18a04db\"" Jan 13 20:24:10.654229 systemd[1]: Started cri-containerd-33f2d21043d4a829a2f6b78e935acda542405001f27ae798c75608dad18a04db.scope - libcontainer container 33f2d21043d4a829a2f6b78e935acda542405001f27ae798c75608dad18a04db. Jan 13 20:24:10.679166 containerd[1472]: time="2025-01-13T20:24:10.679037759Z" level=info msg="StartContainer for \"33f2d21043d4a829a2f6b78e935acda542405001f27ae798c75608dad18a04db\" returns successfully" Jan 13 20:24:10.680233 systemd[1]: cri-containerd-33f2d21043d4a829a2f6b78e935acda542405001f27ae798c75608dad18a04db.scope: Deactivated successfully. Jan 13 20:24:10.705579 containerd[1472]: time="2025-01-13T20:24:10.705521430Z" level=info msg="shim disconnected" id=33f2d21043d4a829a2f6b78e935acda542405001f27ae798c75608dad18a04db namespace=k8s.io Jan 13 20:24:10.705579 containerd[1472]: time="2025-01-13T20:24:10.705574391Z" level=warning msg="cleaning up after shim disconnected" id=33f2d21043d4a829a2f6b78e935acda542405001f27ae798c75608dad18a04db namespace=k8s.io Jan 13 20:24:10.705759 containerd[1472]: time="2025-01-13T20:24:10.705582911Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:24:10.806533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33f2d21043d4a829a2f6b78e935acda542405001f27ae798c75608dad18a04db-rootfs.mount: Deactivated successfully. 
Jan 13 20:24:11.382383 kubelet[2548]: E0113 20:24:11.382343 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:24:11.603560 kubelet[2548]: E0113 20:24:11.603365 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:24:11.606673 containerd[1472]: time="2025-01-13T20:24:11.606133004Z" level=info msg="CreateContainer within sandbox \"56ac89ec4703939311bbc9ae1f94eab89354441e5aee9bcd6a2cc76383205c6c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:24:11.616256 containerd[1472]: time="2025-01-13T20:24:11.616211675Z" level=info msg="CreateContainer within sandbox \"56ac89ec4703939311bbc9ae1f94eab89354441e5aee9bcd6a2cc76383205c6c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b9aaf310049bf427309deb68f7933b712854cd85fb66c12200b594e026199f2c\"" Jan 13 20:24:11.618196 containerd[1472]: time="2025-01-13T20:24:11.617262923Z" level=info msg="StartContainer for \"b9aaf310049bf427309deb68f7933b712854cd85fb66c12200b594e026199f2c\"" Jan 13 20:24:11.646236 systemd[1]: Started cri-containerd-b9aaf310049bf427309deb68f7933b712854cd85fb66c12200b594e026199f2c.scope - libcontainer container b9aaf310049bf427309deb68f7933b712854cd85fb66c12200b594e026199f2c. Jan 13 20:24:11.665536 systemd[1]: cri-containerd-b9aaf310049bf427309deb68f7933b712854cd85fb66c12200b594e026199f2c.scope: Deactivated successfully. Jan 13 20:24:11.668543 containerd[1472]: time="2025-01-13T20:24:11.668504765Z" level=info msg="StartContainer for \"b9aaf310049bf427309deb68f7933b712854cd85fb66c12200b594e026199f2c\" returns successfully" Jan 13 20:24:11.691291 containerd[1472]: time="2025-01-13T20:24:11.691219926Z" level=info msg="shim disconnected" id=b9aaf310049bf427309deb68f7933b712854cd85fb66c12200b594e026199f2c namespace=k8s.io Jan 13 20:24:11.691291 containerd[1472]: time="2025-01-13T20:24:11.691281726Z" level=warning msg="cleaning up after shim disconnected" id=b9aaf310049bf427309deb68f7933b712854cd85fb66c12200b594e026199f2c namespace=k8s.io Jan 13 20:24:11.691291 containerd[1472]: time="2025-01-13T20:24:11.691293326Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:24:11.806635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9aaf310049bf427309deb68f7933b712854cd85fb66c12200b594e026199f2c-rootfs.mount: Deactivated successfully. Jan 13 20:24:12.606539 kubelet[2548]: E0113 20:24:12.606443 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:24:12.611267 containerd[1472]: time="2025-01-13T20:24:12.611217678Z" level=info msg="CreateContainer within sandbox \"56ac89ec4703939311bbc9ae1f94eab89354441e5aee9bcd6a2cc76383205c6c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:24:12.679458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3401646066.mount: Deactivated successfully. 
Jan 13 20:24:12.685591 containerd[1472]: time="2025-01-13T20:24:12.685548115Z" level=info msg="CreateContainer within sandbox \"56ac89ec4703939311bbc9ae1f94eab89354441e5aee9bcd6a2cc76383205c6c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7e68fac5d9aea3145ae6789f962bc94eb8357f6f118431bafb9625b8e26a963b\""
Jan 13 20:24:12.686078 containerd[1472]: time="2025-01-13T20:24:12.686040038Z" level=info msg="StartContainer for \"7e68fac5d9aea3145ae6789f962bc94eb8357f6f118431bafb9625b8e26a963b\""
Jan 13 20:24:12.714225 systemd[1]: Started cri-containerd-7e68fac5d9aea3145ae6789f962bc94eb8357f6f118431bafb9625b8e26a963b.scope - libcontainer container 7e68fac5d9aea3145ae6789f962bc94eb8357f6f118431bafb9625b8e26a963b.
Jan 13 20:24:12.749499 containerd[1472]: time="2025-01-13T20:24:12.749448279Z" level=info msg="StartContainer for \"7e68fac5d9aea3145ae6789f962bc94eb8357f6f118431bafb9625b8e26a963b\" returns successfully"
Jan 13 20:24:12.997117 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 13 20:24:13.611564 kubelet[2548]: E0113 20:24:13.611527 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:14.912913 kubelet[2548]: E0113 20:24:14.912880 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:15.866213 systemd-networkd[1403]: lxc_health: Link UP
Jan 13 20:24:15.876821 systemd-networkd[1403]: lxc_health: Gained carrier
Jan 13 20:24:16.914679 kubelet[2548]: E0113 20:24:16.914592 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:16.930333 kubelet[2548]: I0113 20:24:16.930260 2548 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hd7kb" podStartSLOduration=9.930245071 podStartE2EDuration="9.930245071s" podCreationTimestamp="2025-01-13 20:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:24:13.626634935 +0000 UTC m=+88.329371741" watchObservedRunningTime="2025-01-13 20:24:16.930245071 +0000 UTC m=+91.632981877"
Jan 13 20:24:17.618352 kubelet[2548]: E0113 20:24:17.618309 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:17.809589 systemd-networkd[1403]: lxc_health: Gained IPv6LL
Jan 13 20:24:17.948793 systemd[1]: run-containerd-runc-k8s.io-7e68fac5d9aea3145ae6789f962bc94eb8357f6f118431bafb9625b8e26a963b-runc.tTTU9G.mount: Deactivated successfully.
Jan 13 20:24:18.619706 kubelet[2548]: E0113 20:24:18.619679 2548 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:20.119819 kubelet[2548]: E0113 20:24:20.119779 2548 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56150->127.0.0.1:46691: write tcp 127.0.0.1:56150->127.0.0.1:46691: write: broken pipe
Jan 13 20:24:22.224508 sshd[4403]: Connection closed by 10.0.0.1 port 44440
Jan 13 20:24:22.224976 sshd-session[4399]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:22.227572 systemd[1]: sshd@26-10.0.0.109:22-10.0.0.1:44440.service: Deactivated successfully.
Jan 13 20:24:22.229781 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 20:24:22.231563 systemd-logind[1453]: Session 27 logged out. Waiting for processes to exit.
Jan 13 20:24:22.232465 systemd-logind[1453]: Removed session 27.