Jan 30 13:13:01.921616 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 13:13:01.921637 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 30 13:13:01.921648 kernel: KASLR enabled
Jan 30 13:13:01.921654 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:13:01.921660 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Jan 30 13:13:01.921666 kernel: random: crng init done
Jan 30 13:13:01.921673 kernel: secureboot: Secure boot disabled
Jan 30 13:13:01.921679 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:13:01.921685 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 30 13:13:01.921694 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 13:13:01.921700 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:01.921706 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:01.921712 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:01.921718 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:01.921726 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:01.921734 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:01.921740 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:01.921747 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:01.921753 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:01.921759 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 30 13:13:01.921766 kernel: NUMA: Failed to initialise from firmware
Jan 30 13:13:01.921772 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:13:01.921779 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 30 13:13:01.921785 kernel: Zone ranges:
Jan 30 13:13:01.921791 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:13:01.921802 kernel: DMA32 empty
Jan 30 13:13:01.921811 kernel: Normal empty
Jan 30 13:13:01.921818 kernel: Movable zone start for each node
Jan 30 13:13:01.921825 kernel: Early memory node ranges
Jan 30 13:13:01.921831 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Jan 30 13:13:01.921837 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Jan 30 13:13:01.921844 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Jan 30 13:13:01.921850 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 30 13:13:01.921856 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 30 13:13:01.921862 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 30 13:13:01.921868 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 30 13:13:01.921875 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 30 13:13:01.921882 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 30 13:13:01.921889 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:13:01.921895 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 30 13:13:01.921904 kernel: psci: probing for conduit method from ACPI.
Jan 30 13:13:01.921911 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 13:13:01.921918 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 13:13:01.921926 kernel: psci: Trusted OS migration not required
Jan 30 13:13:01.921932 kernel: psci: SMC Calling Convention v1.1
Jan 30 13:13:01.921939 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 30 13:13:01.921946 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 13:13:01.921952 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 13:13:01.921959 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 30 13:13:01.921965 kernel: Detected PIPT I-cache on CPU0
Jan 30 13:13:01.921972 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 13:13:01.921979 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 13:13:01.921985 kernel: CPU features: detected: Spectre-v4
Jan 30 13:13:01.921993 kernel: CPU features: detected: Spectre-BHB
Jan 30 13:13:01.921999 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 13:13:01.922006 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 13:13:01.922013 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 13:13:01.922019 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 13:13:01.922025 kernel: alternatives: applying boot alternatives
Jan 30 13:13:01.922033 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:13:01.922040 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:13:01.922047 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:13:01.922054 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:13:01.922060 kernel: Fallback order for Node 0: 0
Jan 30 13:13:01.922069 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 30 13:13:01.922075 kernel: Policy zone: DMA
Jan 30 13:13:01.922082 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:13:01.922088 kernel: software IO TLB: area num 4.
Jan 30 13:13:01.922095 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 30 13:13:01.922102 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved)
Jan 30 13:13:01.922109 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:13:01.922116 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:13:01.922123 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:13:01.922130 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:13:01.922136 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:13:01.922143 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:13:01.922152 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:13:01.922158 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:13:01.922165 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 13:13:01.922171 kernel: GICv3: 256 SPIs implemented
Jan 30 13:13:01.922178 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 13:13:01.922184 kernel: Root IRQ handler: gic_handle_irq
Jan 30 13:13:01.922191 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 13:13:01.922198 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 30 13:13:01.922204 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 30 13:13:01.922211 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 13:13:01.922217 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 13:13:01.922225 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 30 13:13:01.922232 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 30 13:13:01.922239 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:13:01.922245 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:13:01.922252 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 13:13:01.922259 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 13:13:01.922265 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 13:13:01.922272 kernel: arm-pv: using stolen time PV
Jan 30 13:13:01.922279 kernel: Console: colour dummy device 80x25
Jan 30 13:13:01.922286 kernel: ACPI: Core revision 20230628
Jan 30 13:13:01.922293 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 13:13:01.922302 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:13:01.922308 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:13:01.922315 kernel: landlock: Up and running.
Jan 30 13:13:01.922322 kernel: SELinux: Initializing.
Jan 30 13:13:01.922329 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:13:01.922336 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:13:01.922360 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:13:01.922367 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:13:01.922374 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:13:01.922383 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:13:01.922390 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 30 13:13:01.922397 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 30 13:13:01.922404 kernel: Remapping and enabling EFI services.
Jan 30 13:13:01.922410 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:13:01.922417 kernel: Detected PIPT I-cache on CPU1
Jan 30 13:13:01.922424 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 30 13:13:01.922431 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 30 13:13:01.922438 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:13:01.922447 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 13:13:01.922454 kernel: Detected PIPT I-cache on CPU2
Jan 30 13:13:01.922467 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 30 13:13:01.922476 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 30 13:13:01.922483 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:13:01.922490 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 30 13:13:01.922497 kernel: Detected PIPT I-cache on CPU3
Jan 30 13:13:01.922505 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 30 13:13:01.922513 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 30 13:13:01.922522 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:13:01.922529 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 30 13:13:01.922536 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:13:01.922543 kernel: SMP: Total of 4 processors activated.
Jan 30 13:13:01.922551 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 13:13:01.922558 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 13:13:01.922565 kernel: CPU features: detected: Common not Private translations
Jan 30 13:13:01.922572 kernel: CPU features: detected: CRC32 instructions
Jan 30 13:13:01.922583 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 30 13:13:01.922597 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 13:13:01.922604 kernel: CPU features: detected: LSE atomic instructions
Jan 30 13:13:01.922612 kernel: CPU features: detected: Privileged Access Never
Jan 30 13:13:01.922619 kernel: CPU features: detected: RAS Extension Support
Jan 30 13:13:01.922626 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 30 13:13:01.922633 kernel: CPU: All CPU(s) started at EL1
Jan 30 13:13:01.922641 kernel: alternatives: applying system-wide alternatives
Jan 30 13:13:01.922648 kernel: devtmpfs: initialized
Jan 30 13:13:01.922657 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:13:01.922665 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:13:01.922672 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:13:01.922679 kernel: SMBIOS 3.0.0 present.
Jan 30 13:13:01.922686 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 30 13:13:01.922693 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:13:01.922701 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 13:13:01.922708 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 13:13:01.922716 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 13:13:01.922725 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:13:01.922732 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jan 30 13:13:01.922739 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:13:01.922746 kernel: cpuidle: using governor menu
Jan 30 13:13:01.922754 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 13:13:01.922761 kernel: ASID allocator initialised with 32768 entries
Jan 30 13:13:01.922768 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:13:01.922775 kernel: Serial: AMBA PL011 UART driver
Jan 30 13:13:01.922782 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 13:13:01.922791 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 13:13:01.922798 kernel: Modules: 508880 pages in range for PLT usage
Jan 30 13:13:01.922806 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:13:01.922813 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:13:01.922820 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 13:13:01.922827 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 13:13:01.922835 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:13:01.922842 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:13:01.922849 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 13:13:01.922858 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 13:13:01.922865 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:13:01.922872 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:13:01.922879 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:13:01.922886 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:13:01.922894 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:13:01.922901 kernel: ACPI: Interpreter enabled
Jan 30 13:13:01.922908 kernel: ACPI: Using GIC for interrupt routing
Jan 30 13:13:01.922915 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 13:13:01.922923 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 13:13:01.922932 kernel: printk: console [ttyAMA0] enabled
Jan 30 13:13:01.922939 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:13:01.923084 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:13:01.923165 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 13:13:01.923249 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 13:13:01.923319 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 30 13:13:01.923471 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 30 13:13:01.923487 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 30 13:13:01.923494 kernel: PCI host bridge to bus 0000:00
Jan 30 13:13:01.923574 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 30 13:13:01.923648 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 13:13:01.923714 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 30 13:13:01.923776 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:13:01.923862 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 30 13:13:01.923946 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:13:01.924016 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 30 13:13:01.924086 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 30 13:13:01.924155 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:13:01.924224 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:13:01.924294 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 30 13:13:01.924425 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 30 13:13:01.924497 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 30 13:13:01.924563 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 13:13:01.924634 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 30 13:13:01.924645 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 13:13:01.924652 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 13:13:01.924660 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 13:13:01.924667 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 13:13:01.924679 kernel: iommu: Default domain type: Translated
Jan 30 13:13:01.924686 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 13:13:01.924694 kernel: efivars: Registered efivars operations
Jan 30 13:13:01.924701 kernel: vgaarb: loaded
Jan 30 13:13:01.924709 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 13:13:01.924716 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:13:01.924724 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:13:01.924731 kernel: pnp: PnP ACPI init
Jan 30 13:13:01.924815 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 30 13:13:01.924828 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 13:13:01.924836 kernel: NET: Registered PF_INET protocol family
Jan 30 13:13:01.924843 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:13:01.924850 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:13:01.924858 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:13:01.924865 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:13:01.924873 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:13:01.924880 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:13:01.924889 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:13:01.924897 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:13:01.924904 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:13:01.924912 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:13:01.924919 kernel: kvm [1]: HYP mode not available
Jan 30 13:13:01.924926 kernel: Initialise system trusted keyrings
Jan 30 13:13:01.924934 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:13:01.924941 kernel: Key type asymmetric registered
Jan 30 13:13:01.924948 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:13:01.924957 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 13:13:01.924964 kernel: io scheduler mq-deadline registered
Jan 30 13:13:01.924972 kernel: io scheduler kyber registered
Jan 30 13:13:01.924979 kernel: io scheduler bfq registered
Jan 30 13:13:01.924987 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 13:13:01.924994 kernel: ACPI: button: Power Button [PWRB]
Jan 30 13:13:01.925002 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 13:13:01.925072 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 30 13:13:01.925082 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:13:01.925092 kernel: thunder_xcv, ver 1.0
Jan 30 13:13:01.925099 kernel: thunder_bgx, ver 1.0
Jan 30 13:13:01.925107 kernel: nicpf, ver 1.0
Jan 30 13:13:01.925114 kernel: nicvf, ver 1.0
Jan 30 13:13:01.925192 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 13:13:01.925260 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T13:13:01 UTC (1738242781)
Jan 30 13:13:01.925270 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:13:01.925278 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 30 13:13:01.925285 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 13:13:01.925295 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 13:13:01.925302 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:13:01.925310 kernel: Segment Routing with IPv6
Jan 30 13:13:01.925317 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:13:01.925325 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:13:01.925358 kernel: Key type dns_resolver registered
Jan 30 13:13:01.925367 kernel: registered taskstats version 1
Jan 30 13:13:01.925374 kernel: Loading compiled-in X.509 certificates
Jan 30 13:13:01.925384 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a'
Jan 30 13:13:01.925391 kernel: Key type .fscrypt registered
Jan 30 13:13:01.925399 kernel: Key type fscrypt-provisioning registered
Jan 30 13:13:01.925406 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:13:01.925406 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:13:01.925413 kernel: ima: No architecture policies found
Jan 30 13:13:01.925420 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 13:13:01.925428 kernel: clk: Disabling unused clocks
Jan 30 13:13:01.925435 kernel: Freeing unused kernel memory: 39936K
Jan 30 13:13:01.925444 kernel: Run /init as init process
Jan 30 13:13:01.925451 kernel: with arguments:
Jan 30 13:13:01.925459 kernel: /init
Jan 30 13:13:01.925466 kernel: with environment:
Jan 30 13:13:01.925474 kernel: HOME=/
Jan 30 13:13:01.925481 kernel: TERM=linux
Jan 30 13:13:01.925488 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:13:01.925498 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:13:01.925510 systemd[1]: Detected virtualization kvm.
Jan 30 13:13:01.925518 systemd[1]: Detected architecture arm64.
Jan 30 13:13:01.925526 systemd[1]: Running in initrd.
Jan 30 13:13:01.925534 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:13:01.925542 systemd[1]: Hostname set to <localhost>.
Jan 30 13:13:01.925550 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:13:01.925558 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:13:01.925568 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:13:01.925583 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:13:01.925599 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:13:01.925608 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:13:01.925618 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:13:01.925627 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:13:01.925637 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:13:01.925645 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:13:01.925662 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:13:01.925672 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:13:01.925680 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:13:01.925688 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:13:01.925696 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:13:01.925704 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:13:01.925712 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:13:01.925721 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:13:01.925729 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:13:01.925739 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:13:01.925747 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:13:01.925755 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:13:01.925763 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:13:01.925771 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:13:01.925779 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:13:01.925787 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:13:01.925795 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:13:01.925804 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:13:01.925812 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:13:01.925820 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:13:01.925828 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:13:01.925835 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:13:01.925843 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:13:01.925851 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:13:01.925862 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:13:01.925870 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:13:01.925878 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:13:01.925908 systemd-journald[239]: Collecting audit messages is disabled.
Jan 30 13:13:01.925930 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:13:01.925938 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:13:01.925947 systemd-journald[239]: Journal started
Jan 30 13:13:01.925970 systemd-journald[239]: Runtime Journal (/run/log/journal/a4a202864ed6462c86f1278c72d4cfc6) is 5.9M, max 47.3M, 41.4M free.
Jan 30 13:13:01.917432 systemd-modules-load[240]: Inserted module 'overlay'
Jan 30 13:13:01.927532 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:13:01.933070 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:13:01.932151 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:13:01.935419 kernel: Bridge firewalling registered
Jan 30 13:13:01.933700 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 30 13:13:01.934627 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:13:01.938551 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:13:01.942669 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:13:01.946740 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:13:01.948992 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:13:01.955541 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:13:01.956512 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:13:01.959532 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:13:01.966150 dracut-cmdline[276]: dracut-dracut-053
Jan 30 13:13:01.969315 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:13:01.987816 systemd-resolved[281]: Positive Trust Anchors:
Jan 30 13:13:01.987829 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:13:01.987861 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:13:01.992673 systemd-resolved[281]: Defaulting to hostname 'linux'.
Jan 30 13:13:01.995850 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:13:01.997011 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:13:02.063393 kernel: SCSI subsystem initialized
Jan 30 13:13:02.068362 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:13:02.076384 kernel: iscsi: registered transport (tcp)
Jan 30 13:13:02.089616 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:13:02.089641 kernel: QLogic iSCSI HBA Driver
Jan 30 13:13:02.143805 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:13:02.159525 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:13:02.178045 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:13:02.178123 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:13:02.178135 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:13:02.227373 kernel: raid6: neonx8 gen() 15771 MB/s
Jan 30 13:13:02.244357 kernel: raid6: neonx4 gen() 15802 MB/s
Jan 30 13:13:02.261355 kernel: raid6: neonx2 gen() 13202 MB/s
Jan 30 13:13:02.278354 kernel: raid6: neonx1 gen() 10486 MB/s
Jan 30 13:13:02.295359 kernel: raid6: int64x8 gen() 6791 MB/s
Jan 30 13:13:02.312357 kernel: raid6: int64x4 gen() 7352 MB/s
Jan 30 13:13:02.329353 kernel: raid6: int64x2 gen() 6112 MB/s
Jan 30 13:13:02.346353 kernel: raid6: int64x1 gen() 5058 MB/s
Jan 30 13:13:02.346367 kernel: raid6: using algorithm neonx4 gen() 15802 MB/s
Jan 30 13:13:02.363368 kernel: raid6: .... xor() 12348 MB/s, rmw enabled
Jan 30 13:13:02.363393 kernel: raid6: using neon recovery algorithm
Jan 30 13:13:02.368626 kernel: xor: measuring software checksum speed
Jan 30 13:13:02.368642 kernel: 8regs : 20956 MB/sec
Jan 30 13:13:02.368657 kernel: 32regs : 21704 MB/sec
Jan 30 13:13:02.369621 kernel: arm64_neon : 27965 MB/sec
Jan 30 13:13:02.369634 kernel: xor: using function: arm64_neon (27965 MB/sec)
Jan 30 13:13:02.419364 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:13:02.430360 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:13:02.444516 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:13:02.455418 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jan 30 13:13:02.458544 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:13:02.460841 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:13:02.475872 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Jan 30 13:13:02.502422 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:13:02.519492 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:13:02.559872 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:13:02.567607 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:13:02.580124 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:13:02.582132 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:13:02.584130 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:13:02.585845 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:13:02.593485 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:13:02.603771 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:13:02.611357 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 30 13:13:02.624073 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 13:13:02.624190 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:13:02.624208 kernel: GPT:9289727 != 19775487
Jan 30 13:13:02.624217 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:13:02.624226 kernel: GPT:9289727 != 19775487
Jan 30 13:13:02.624234 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:13:02.624243 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:13:02.618743 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:13:02.618854 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:13:02.622697 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:13:02.623552 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:13:02.623805 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:13:02.624724 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:13:02.634698 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:13:02.641370 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (515)
Jan 30 13:13:02.643426 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (508)
Jan 30 13:13:02.648734 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:13:02.650454 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:13:02.660769 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:13:02.665085 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:13:02.668626 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:13:02.669513 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:13:02.680492 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:13:02.682046 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:13:02.689453 disk-uuid[553]: Primary Header is updated.
Jan 30 13:13:02.689453 disk-uuid[553]: Secondary Entries is updated.
Jan 30 13:13:02.689453 disk-uuid[553]: Secondary Header is updated.
Jan 30 13:13:02.694363 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:13:02.710758 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:13:03.710365 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:13:03.710465 disk-uuid[554]: The operation has completed successfully.
Jan 30 13:13:03.730442 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:13:03.730545 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:13:03.754541 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:13:03.757667 sh[574]: Success
Jan 30 13:13:03.772375 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 13:13:03.813841 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:13:03.815410 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:13:03.816147 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:13:03.831424 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 30 13:13:03.831468 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:13:03.831478 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:13:03.832809 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:13:03.833352 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:13:03.836872 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:13:03.838161 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:13:03.855502 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:13:03.856913 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:13:03.864634 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:13:03.864672 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:13:03.864690 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:13:03.867405 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:13:03.875204 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:13:03.876750 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:13:03.882572 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:13:03.889511 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:13:03.957927 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:13:03.970540 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:13:03.988166 ignition[663]: Ignition 2.20.0
Jan 30 13:13:03.988927 ignition[663]: Stage: fetch-offline
Jan 30 13:13:03.989603 ignition[663]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:13:03.990266 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:13:03.991287 ignition[663]: parsed url from cmdline: ""
Jan 30 13:13:03.990988 systemd-networkd[766]: lo: Link UP
Jan 30 13:13:03.991291 ignition[663]: no config URL provided
Jan 30 13:13:03.990992 systemd-networkd[766]: lo: Gained carrier
Jan 30 13:13:03.991297 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:13:03.991824 systemd-networkd[766]: Enumeration completed
Jan 30 13:13:03.991309 ignition[663]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:13:03.991955 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:13:03.991337 ignition[663]: op(1): [started] loading QEMU firmware config module
Jan 30 13:13:03.992276 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:13:03.991354 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 13:13:03.992279 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:13:04.000996 ignition[663]: op(1): [finished] loading QEMU firmware config module
Jan 30 13:13:03.993035 systemd[1]: Reached target network.target - Network.
Jan 30 13:13:03.993104 systemd-networkd[766]: eth0: Link UP
Jan 30 13:13:03.993108 systemd-networkd[766]: eth0: Gained carrier
Jan 30 13:13:03.993115 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:13:04.013406 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:13:04.045048 ignition[663]: parsing config with SHA512: 233de51947a72489ac1836c56d5abc944a6f31ea3c68da09a32ee65694291f1d6af24b52ce79b7ce1a6d32aeb0427a87fa87201f330654ac39f9a32fe281efb3
Jan 30 13:13:04.049994 unknown[663]: fetched base config from "system"
Jan 30 13:13:04.050003 unknown[663]: fetched user config from "qemu"
Jan 30 13:13:04.050448 ignition[663]: fetch-offline: fetch-offline passed
Jan 30 13:13:04.051725 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:13:04.050523 ignition[663]: Ignition finished successfully
Jan 30 13:13:04.053649 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 13:13:04.068553 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:13:04.079823 ignition[773]: Ignition 2.20.0
Jan 30 13:13:04.079835 ignition[773]: Stage: kargs
Jan 30 13:13:04.080002 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:13:04.080012 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:13:04.080952 ignition[773]: kargs: kargs passed
Jan 30 13:13:04.084001 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:13:04.081003 ignition[773]: Ignition finished successfully
Jan 30 13:13:04.096500 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:13:04.106830 ignition[781]: Ignition 2.20.0
Jan 30 13:13:04.106844 ignition[781]: Stage: disks
Jan 30 13:13:04.107023 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:13:04.107033 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:13:04.109072 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:13:04.107994 ignition[781]: disks: disks passed
Jan 30 13:13:04.110391 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:13:04.108041 ignition[781]: Ignition finished successfully
Jan 30 13:13:04.111509 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:13:04.113109 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:13:04.114210 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:13:04.115706 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:13:04.127495 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:13:04.138669 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:13:04.142684 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:13:04.144614 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:13:04.196359 kernel: EXT4-fs (vda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 30 13:13:04.196776 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:13:04.197897 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:13:04.208442 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:13:04.210198 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:13:04.212422 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:13:04.212475 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:13:04.212499 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:13:04.219542 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (800)
Jan 30 13:13:04.216737 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:13:04.219562 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:13:04.224125 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:13:04.224152 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:13:04.224163 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:13:04.226370 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:13:04.228019 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:13:04.263337 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:13:04.267766 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:13:04.271550 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:13:04.275150 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:13:04.346540 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:13:04.351444 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:13:04.352784 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:13:04.358362 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:13:04.374835 ignition[913]: INFO : Ignition 2.20.0
Jan 30 13:13:04.374835 ignition[913]: INFO : Stage: mount
Jan 30 13:13:04.376101 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:13:04.376101 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:13:04.376101 ignition[913]: INFO : mount: mount passed
Jan 30 13:13:04.376101 ignition[913]: INFO : Ignition finished successfully
Jan 30 13:13:04.377606 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:13:04.378735 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:13:04.386505 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:13:04.830430 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:13:04.840528 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:13:04.846728 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925)
Jan 30 13:13:04.846759 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:13:04.846770 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:13:04.847382 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:13:04.850359 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:13:04.850800 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:13:04.867593 ignition[942]: INFO : Ignition 2.20.0
Jan 30 13:13:04.867593 ignition[942]: INFO : Stage: files
Jan 30 13:13:04.869232 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:13:04.869232 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:13:04.869232 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:13:04.872207 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:13:04.872207 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:13:04.872207 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:13:04.872207 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:13:04.872207 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:13:04.871973 unknown[942]: wrote ssh authorized keys file for user: core
Jan 30 13:13:04.878761 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 30 13:13:04.878761 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 30 13:13:04.926467 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:13:05.110250 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 30 13:13:05.110250 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:13:05.113315 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 30 13:13:05.395926 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 13:13:05.454505 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:13:05.454505 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:13:05.457558 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:13:05.457558 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:13:05.457558 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:13:05.457558 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:13:05.457558 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:13:05.457558 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:13:05.457558 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:13:05.457558 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:13:05.457558 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:13:05.457558 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 30 13:13:05.457558 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 30 13:13:05.457558 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 30 13:13:05.457558 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Jan 30 13:13:05.617299 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 13:13:05.676710 systemd-networkd[766]: eth0: Gained IPv6LL
Jan 30 13:13:05.837477 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 30 13:13:05.837477 ignition[942]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 30 13:13:05.841403 ignition[942]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:13:05.841403 ignition[942]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:13:05.841403 ignition[942]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 30 13:13:05.841403 ignition[942]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 30 13:13:05.841403 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:13:05.841403 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:13:05.841403 ignition[942]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 30 13:13:05.841403 ignition[942]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:13:05.870268 ignition[942]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:13:05.874830 ignition[942]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:13:05.876171 ignition[942]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:13:05.876171 ignition[942]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:13:05.876171 ignition[942]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:13:05.876171 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:13:05.876171 ignition[942]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:13:05.876171 ignition[942]: INFO : files: files passed
Jan 30 13:13:05.876171 ignition[942]: INFO : Ignition finished successfully
Jan 30 13:13:05.878035 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:13:05.888638 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:13:05.890466 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:13:05.893056 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:13:05.893164 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:13:05.900271 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 30 13:13:05.903957 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:13:05.903957 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:13:05.906598 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:13:05.906826 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:13:05.909246 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:13:05.924553 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:13:05.946391 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:13:05.946501 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:13:05.948291 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:13:05.949681 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:13:05.951186 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:13:05.952053 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:13:05.969853 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:13:05.978500 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:13:05.987685 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:13:05.988683 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:13:05.990298 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:13:05.991723 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:13:05.991842 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:13:05.993910 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:13:05.995497 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:13:05.996944 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:13:05.998409 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:13:06.000022 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:13:06.001720 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:13:06.003287 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:13:06.004884 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:13:06.006538 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:13:06.007987 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:13:06.009175 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:13:06.009311 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:13:06.011213 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:13:06.012823 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:13:06.014427 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:13:06.015441 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:13:06.016872 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:13:06.017043 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:13:06.019247 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:13:06.019380 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:13:06.021174 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:13:06.022564 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:13:06.029416 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:13:06.030473 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:13:06.032246 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:13:06.033643 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:13:06.033755 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:13:06.035037 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:13:06.035124 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:13:06.036324 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:13:06.036447 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:13:06.037848 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:13:06.037942 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:13:06.054582 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:13:06.056875 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:13:06.057620 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:13:06.057750 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:13:06.059479 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:13:06.059583 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:13:06.065007 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:13:06.066378 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 30 13:13:06.073233 ignition[997]: INFO : Ignition 2.20.0 Jan 30 13:13:06.073233 ignition[997]: INFO : Stage: umount Jan 30 13:13:06.075606 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:13:06.075606 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:13:06.075606 ignition[997]: INFO : umount: umount passed Jan 30 13:13:06.075606 ignition[997]: INFO : Ignition finished successfully Jan 30 13:13:06.077582 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:13:06.078078 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:13:06.078166 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:13:06.079451 systemd[1]: Stopped target network.target - Network. Jan 30 13:13:06.080751 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:13:06.080807 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:13:06.082114 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:13:06.082153 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:13:06.090594 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:13:06.090662 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:13:06.091998 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:13:06.092039 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:13:06.093644 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:13:06.094906 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:13:06.096556 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:13:06.096656 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:13:06.098807 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:13:06.099026 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:13:06.106432 systemd-networkd[766]: eth0: DHCPv6 lease lost Jan 30 13:13:06.108304 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:13:06.108447 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:13:06.110049 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:13:06.110103 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:13:06.120821 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:13:06.121590 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:13:06.121656 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:13:06.123445 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:13:06.125238 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:13:06.125384 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:13:06.136965 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:13:06.137209 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:13:06.138825 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:13:06.138877 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:13:06.140272 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 30 13:13:06.140324 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:13:06.145458 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:13:06.145623 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:13:06.147466 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:13:06.147592 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:13:06.150138 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:13:06.150205 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:13:06.151734 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:13:06.151771 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:13:06.153253 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:13:06.153303 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:13:06.155717 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:13:06.155764 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:13:06.157833 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:13:06.157875 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:13:06.170564 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:13:06.171468 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:13:06.171531 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:13:06.173320 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:13:06.173374 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:13:06.175063 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:13:06.175100 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:13:06.176875 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:13:06.176922 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:13:06.178940 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:13:06.179450 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:13:06.181012 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:13:06.183023 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:13:06.196087 systemd[1]: Switching root. Jan 30 13:13:06.227613 systemd-journald[239]: Journal stopped Jan 30 13:13:06.958442 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
Jan 30 13:13:06.958508 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:13:06.958524 kernel: SELinux: policy capability open_perms=1 Jan 30 13:13:06.958538 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:13:06.958549 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:13:06.958558 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:13:06.958576 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:13:06.958597 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:13:06.958610 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:13:06.958620 kernel: audit: type=1403 audit(1738242786.398:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:13:06.958631 systemd[1]: Successfully loaded SELinux policy in 34.149ms. Jan 30 13:13:06.958650 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.774ms. Jan 30 13:13:06.958662 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:13:06.958673 systemd[1]: Detected virtualization kvm. Jan 30 13:13:06.958684 systemd[1]: Detected architecture arm64. Jan 30 13:13:06.958695 systemd[1]: Detected first boot. Jan 30 13:13:06.958707 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:13:06.958718 zram_generator::config[1042]: No configuration found. Jan 30 13:13:06.958729 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:13:06.958742 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:13:06.958752 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:13:06.958766 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:13:06.958777 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:13:06.958787 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:13:06.958798 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:13:06.958809 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:13:06.958820 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:13:06.958831 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:13:06.958844 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:13:06.958854 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:13:06.958865 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:13:06.958876 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:13:06.958887 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:13:06.958898 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:13:06.958908 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
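systemd logs "Detected first boot." right after loading the SELinux policy; first boot is inferred from /etc/machine-id being absent or uninitialized, after which the ID is seeded here from the VM UUID. A small sketch of that check, assuming the standard path; systemd's real logic also honors kernel command-line overrides.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// First-boot heuristic in the spirit of systemd: the boot counts as
// the first one when /etc/machine-id is missing, empty, or still holds
// the literal "uninitialized" sentinel. Simplified sketch.
func main() {
	data, err := os.ReadFile("/etc/machine-id")
	switch {
	case os.IsNotExist(err):
		fmt.Println("first boot: /etc/machine-id missing")
	case err != nil:
		fmt.Println("cannot read machine-id:", err)
	default:
		id := strings.TrimSpace(string(data))
		if id == "" || id == "uninitialized" {
			fmt.Println("first boot: machine-id not yet initialized")
		} else {
			fmt.Println("machine-id:", id)
		}
	}
}
```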
Jan 30 13:13:06.958919 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:13:06.958930 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 30 13:13:06.958942 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:13:06.958953 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:13:06.958963 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:13:06.958974 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:13:06.958988 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:13:06.959001 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:13:06.959012 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:13:06.959024 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:13:06.959035 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:13:06.959046 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:13:06.959057 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:13:06.959067 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:13:06.959078 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:13:06.959089 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:13:06.959100 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:13:06.959111 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:13:06.959121 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:13:06.959134 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:13:06.959145 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:13:06.959161 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:13:06.959174 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:13:06.959185 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:13:06.959196 systemd[1]: Reached target machines.target - Containers. Jan 30 13:13:06.959206 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:13:06.959218 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:13:06.959230 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:13:06.959242 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:13:06.959252 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:13:06.959263 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:13:06.959274 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:13:06.959284 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:13:06.959295 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 30 13:13:06.959306 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:13:06.959318 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:13:06.959329 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:13:06.959425 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:13:06.959440 kernel: fuse: init (API version 7.39) Jan 30 13:13:06.959451 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:13:06.959461 kernel: loop: module loaded Jan 30 13:13:06.959471 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:13:06.959482 kernel: ACPI: bus type drm_connector registered Jan 30 13:13:06.959493 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:13:06.959503 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:13:06.959516 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:13:06.959527 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:13:06.959538 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:13:06.959549 systemd[1]: Stopped verity-setup.service. Jan 30 13:13:06.959590 systemd-journald[1106]: Collecting audit messages is disabled. Jan 30 13:13:06.959614 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:13:06.959625 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:13:06.959639 systemd-journald[1106]: Journal started Jan 30 13:13:06.959664 systemd-journald[1106]: Runtime Journal (/run/log/journal/a4a202864ed6462c86f1278c72d4cfc6) is 5.9M, max 47.3M, 41.4M free. Jan 30 13:13:06.768608 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:13:06.785437 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:13:06.785812 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:13:06.963295 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:13:06.962802 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:13:06.963750 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:13:06.964725 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:13:06.965696 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:13:06.966776 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:13:06.968088 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:13:06.968281 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:13:06.969844 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:13:06.970016 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:13:06.971190 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:13:06.971332 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:13:06.972796 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:13:06.974805 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:13:06.974944 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 30 13:13:06.977657 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:13:06.977808 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:13:06.978854 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:13:06.978986 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:13:06.980332 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:13:06.981504 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:13:06.982836 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:13:06.995381 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:13:07.007479 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:13:07.009498 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:13:07.010425 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:13:07.010457 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:13:07.012502 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:13:07.014676 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:13:07.017532 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:13:07.018530 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:13:07.019989 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:13:07.022551 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:13:07.023937 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:13:07.025616 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:13:07.026750 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:13:07.028521 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:13:07.034560 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:13:07.038727 systemd-journald[1106]: Time spent on flushing to /var/log/journal/a4a202864ed6462c86f1278c72d4cfc6 is 18.343ms for 862 entries. Jan 30 13:13:07.038727 systemd-journald[1106]: System Journal (/var/log/journal/a4a202864ed6462c86f1278c72d4cfc6) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:13:07.063610 systemd-journald[1106]: Received client request to flush runtime journal. Jan 30 13:13:07.063664 kernel: loop0: detected capacity change from 0 to 201592 Jan 30 13:13:07.039860 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:13:07.043158 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:13:07.044772 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:13:07.045869 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
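For scale: the flush above moved 862 entries from the runtime journal to /var/log/journal in 18.343 ms, i.e. roughly 21 µs per entry. A quick check of that arithmetic:

```go
package main

import "fmt"

// Back-of-the-envelope check of the journald flush numbers logged
// above: 18.343 ms spent writing 862 entries to persistent storage.
func main() {
	const flushMs = 18.343
	const entries = 862.0
	perEntryUs := flushMs * 1000 / entries
	fmt.Printf("%.1f µs per entry (~%.0f entries/s)\n",
		perEntryUs, entries/(flushMs/1000))
}
```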
Jan 30 13:13:07.048374 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:13:07.051665 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:13:07.055723 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:13:07.068715 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:13:07.071265 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:13:07.075361 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:13:07.075917 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:13:07.077632 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:13:07.087032 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Jan 30 13:13:07.087053 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Jan 30 13:13:07.092196 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:13:07.102847 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:13:07.104961 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:13:07.106713 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:13:07.109625 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:13:07.113522 kernel: loop1: detected capacity change from 0 to 116784 Jan 30 13:13:07.127095 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:13:07.137603 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:13:07.146371 kernel: loop2: detected capacity change from 0 to 113552 Jan 30 13:13:07.155893 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Jan 30 13:13:07.155908 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Jan 30 13:13:07.162101 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:13:07.187376 kernel: loop3: detected capacity change from 0 to 201592 Jan 30 13:13:07.197370 kernel: loop4: detected capacity change from 0 to 116784 Jan 30 13:13:07.202393 kernel: loop5: detected capacity change from 0 to 113552 Jan 30 13:13:07.206296 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 13:13:07.206752 (sd-merge)[1181]: Merged extensions into '/usr'. Jan 30 13:13:07.210940 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:13:07.210955 systemd[1]: Reloading... Jan 30 13:13:07.275374 zram_generator::config[1209]: No configuration found. Jan 30 13:13:07.339928 ldconfig[1148]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:13:07.366177 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:13:07.402242 systemd[1]: Reloading finished in 190 ms. Jan 30 13:13:07.434005 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
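The loop0..loop5 capacity changes above are systemd-sysext attaching the extension images it then reports merging ('containerd-flatcar', 'docker-flatcar', 'kubernetes') into /usr, which is why the service manager reloads afterwards. A simplified sketch that enumerates images from the standard sysext search directories (assumed paths; the real tool also validates each image's extension-release metadata before merging):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// List candidate system-extension images from directories that
// systemd-sysext searches. Simplified: the real implementation also
// checks each image's extension-release file for compatibility before
// merging it into /usr via an overlay.
func main() {
	dirs := []string{"/etc/extensions", "/run/extensions", "/var/lib/extensions"}
	for _, dir := range dirs {
		entries, err := os.ReadDir(dir)
		if err != nil {
			continue // an absent directory is fine
		}
		for _, e := range entries {
			fmt.Println(filepath.Join(dir, e.Name()))
		}
	}
}
```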
Jan 30 13:13:07.438854 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:13:07.448540 systemd[1]: Starting ensure-sysext.service... Jan 30 13:13:07.450247 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:13:07.464228 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:13:07.464244 systemd[1]: Reloading... Jan 30 13:13:07.471013 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:13:07.471216 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:13:07.471861 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:13:07.472066 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jan 30 13:13:07.472109 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jan 30 13:13:07.474549 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:13:07.474560 systemd-tmpfiles[1243]: Skipping /boot Jan 30 13:13:07.482297 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:13:07.482315 systemd-tmpfiles[1243]: Skipping /boot Jan 30 13:13:07.513363 zram_generator::config[1270]: No configuration found. Jan 30 13:13:07.590846 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:13:07.627557 systemd[1]: Reloading finished in 163 ms. Jan 30 13:13:07.646406 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:13:07.658759 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:13:07.666320 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:13:07.668505 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:13:07.671655 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:13:07.674676 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:13:07.679654 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:13:07.688398 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:13:07.691473 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:13:07.692704 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:13:07.695850 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:13:07.705085 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:13:07.706023 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:13:07.707737 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:13:07.709300 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:13:07.712825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 30 13:13:07.712960 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:13:07.714253 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:13:07.714416 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:13:07.715874 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:13:07.716068 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:13:07.724056 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:13:07.729806 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:13:07.731521 systemd-udevd[1311]: Using default interface naming scheme 'v255'. Jan 30 13:13:07.733690 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:13:07.739465 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:13:07.740720 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:13:07.743740 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:13:07.746270 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:13:07.746442 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:13:07.747800 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:13:07.747930 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:13:07.750251 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:13:07.753245 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:13:07.761197 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:13:07.764856 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:13:07.764984 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:13:07.769701 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:13:07.778449 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:13:07.790763 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:13:07.792961 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:13:07.796058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:13:07.797675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:13:07.800760 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:13:07.802487 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:13:07.802769 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:13:07.804128 systemd[1]: Finished ensure-sysext.service. Jan 30 13:13:07.805191 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:13:07.805327 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 30 13:13:07.807797 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:13:07.807939 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:13:07.809750 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:13:07.809880 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:13:07.817197 augenrules[1382]: No rules Jan 30 13:13:07.822902 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:13:07.823111 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:13:07.826676 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:13:07.826767 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:13:07.834515 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:13:07.861824 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 30 13:13:07.871218 systemd-resolved[1309]: Positive Trust Anchors: Jan 30 13:13:07.877926 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1340) Jan 30 13:13:07.871234 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:13:07.871281 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:13:07.894210 systemd-resolved[1309]: Defaulting to hostname 'linux'. Jan 30 13:13:07.896302 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:13:07.897492 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:13:07.902321 systemd-networkd[1374]: lo: Link UP Jan 30 13:13:07.902664 systemd-networkd[1374]: lo: Gained carrier Jan 30 13:13:07.910221 systemd-networkd[1374]: Enumeration completed Jan 30 13:13:07.910470 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:13:07.911235 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:13:07.911317 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:13:07.912057 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:13:07.912277 systemd-networkd[1374]: eth0: Link UP Jan 30 13:13:07.912413 systemd-networkd[1374]: eth0: Gained carrier Jan 30 13:13:07.912472 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:13:07.913452 systemd[1]: Reached target network.target - Network. Jan 30 13:13:07.914229 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 30 13:13:07.923701 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:13:07.935377 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:13:07.935742 systemd-networkd[1374]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:13:07.936497 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Jan 30 13:13:07.939432 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:13:07.939584 systemd-timesyncd[1391]: Initial clock synchronization to Thu 2025-01-30 13:13:08.280983 UTC. Jan 30 13:13:07.944589 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:13:07.959467 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:13:07.970695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:13:07.979800 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:13:07.982406 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:13:08.021430 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:13:08.030268 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:13:08.058145 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:13:08.059461 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:13:08.060309 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:13:08.062568 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:13:08.063567 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:13:08.064657 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:13:08.065536 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:13:08.066516 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:13:08.067549 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:13:08.067581 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:13:08.068274 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:13:08.070196 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:13:08.072612 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:13:08.080331 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:13:08.082422 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:13:08.083697 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:13:08.084658 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:13:08.085420 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:13:08.086172 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
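networkd leased 10.0.0.147/16 from 10.0.0.1, and timesyncd then synchronized against the same address on UDP port 123, so the NTP server sits inside the leased prefix. A quick verification with Go's net/netip:

```go
package main

import (
	"fmt"
	"net/netip"
)

// Verify that the DHCP gateway / NTP server (10.0.0.1) sits inside
// the leased prefix 10.0.0.147/16, as the log above implies.
func main() {
	lease := netip.MustParsePrefix("10.0.0.147/16")
	gw := netip.MustParseAddr("10.0.0.1")
	subnet := lease.Masked() // 10.0.0.0/16
	fmt.Println("subnet:", subnet)
	fmt.Println("gateway in subnet:", subnet.Contains(gw))
}
```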
Jan 30 13:13:08.086210 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:13:08.087142 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:13:08.089024 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:13:08.090653 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:13:08.092213 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:13:08.094584 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:13:08.095505 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:13:08.096612 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:13:08.102533 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:13:08.105672 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:13:08.107949 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:13:08.109161 jq[1416]: false Jan 30 13:13:08.115360 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:13:08.121781 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:13:08.122217 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:13:08.125465 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:13:08.128031 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:13:08.128281 extend-filesystems[1417]: Found loop3 Jan 30 13:13:08.130510 extend-filesystems[1417]: Found loop4 Jan 30 13:13:08.130510 extend-filesystems[1417]: Found loop5 Jan 30 13:13:08.130510 extend-filesystems[1417]: Found vda Jan 30 13:13:08.130510 extend-filesystems[1417]: Found vda1 Jan 30 13:13:08.130510 extend-filesystems[1417]: Found vda2 Jan 30 13:13:08.130510 extend-filesystems[1417]: Found vda3 Jan 30 13:13:08.130510 extend-filesystems[1417]: Found usr Jan 30 13:13:08.130510 extend-filesystems[1417]: Found vda4 Jan 30 13:13:08.130510 extend-filesystems[1417]: Found vda6 Jan 30 13:13:08.130510 extend-filesystems[1417]: Found vda7 Jan 30 13:13:08.130510 extend-filesystems[1417]: Found vda9 Jan 30 13:13:08.130510 extend-filesystems[1417]: Checking size of /dev/vda9 Jan 30 13:13:08.130036 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:13:08.129477 dbus-daemon[1415]: [system] SELinux support is enabled Jan 30 13:13:08.136431 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:13:08.150776 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:13:08.155883 jq[1429]: true Jan 30 13:13:08.152609 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:13:08.154845 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:13:08.155526 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 30 13:13:08.168064 extend-filesystems[1417]: Resized partition /dev/vda9 Jan 30 13:13:08.175316 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:13:08.175350 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:13:08.177443 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:13:08.177467 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:13:08.178405 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1358) Jan 30 13:13:08.182276 extend-filesystems[1447]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:13:08.183737 systemd-logind[1423]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 13:13:08.184409 systemd-logind[1423]: New seat seat0. Jan 30 13:13:08.186241 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:13:08.188226 (ntainerd)[1441]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:13:08.190457 jq[1440]: true Jan 30 13:13:08.191028 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:13:08.191226 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:13:08.191872 update_engine[1426]: I20250130 13:13:08.191708 1426 main.cc:92] Flatcar Update Engine starting Jan 30 13:13:08.195427 update_engine[1426]: I20250130 13:13:08.195355 1426 update_check_scheduler.cc:74] Next update check in 3m21s Jan 30 13:13:08.201134 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:13:08.200304 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:13:08.206272 tar[1439]: linux-arm64/LICENSE Jan 30 13:13:08.207511 tar[1439]: linux-arm64/helm Jan 30 13:13:08.216651 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:13:08.256037 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:13:08.273642 locksmithd[1456]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:13:08.274724 extend-filesystems[1447]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:13:08.274724 extend-filesystems[1447]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:13:08.274724 extend-filesystems[1447]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:13:08.279483 extend-filesystems[1417]: Resized filesystem in /dev/vda9 Jan 30 13:13:08.276278 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:13:08.276481 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:13:08.281651 bash[1470]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:13:08.283264 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:13:08.287468 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
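The online resize above grows /dev/vda9 from 553472 to 1864699 blocks of 4 KiB, i.e. from about 2.1 GiB to roughly 7.1 GiB, claiming the rest of the disk for the root filesystem. The conversion, checked in Go:

```go
package main

import "fmt"

// Convert the resize2fs block counts logged above into sizes.
// The filesystem uses 4 KiB blocks.
func main() {
	const blockSize = 4096
	for _, blocks := range []int64{553472, 1864699} {
		bytes := blocks * blockSize
		fmt.Printf("%8d blocks = %5.2f GiB\n",
			blocks, float64(bytes)/(1<<30))
	}
}
```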
Jan 30 13:13:08.414124 sshd_keygen[1428]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:13:08.428853 containerd[1441]: time="2025-01-30T13:13:08.428733684Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 13:13:08.437429 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:13:08.450697 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:13:08.457680 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:13:08.458015 containerd[1441]: time="2025-01-30T13:13:08.457409919Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:13:08.458107 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:13:08.458990 containerd[1441]: time="2025-01-30T13:13:08.458739254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:13:08.458990 containerd[1441]: time="2025-01-30T13:13:08.458772622Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:13:08.458990 containerd[1441]: time="2025-01-30T13:13:08.458791600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:13:08.459069 containerd[1441]: time="2025-01-30T13:13:08.458987553Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:13:08.459069 containerd[1441]: time="2025-01-30T13:13:08.459008408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:13:08.459105 containerd[1441]: time="2025-01-30T13:13:08.459073893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:13:08.459105 containerd[1441]: time="2025-01-30T13:13:08.459087782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:13:08.461777 containerd[1441]: time="2025-01-30T13:13:08.459250409Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:13:08.461777 containerd[1441]: time="2025-01-30T13:13:08.459276811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:13:08.461777 containerd[1441]: time="2025-01-30T13:13:08.459291618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:13:08.461777 containerd[1441]: time="2025-01-30T13:13:08.459302880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:13:08.461777 containerd[1441]: time="2025-01-30T13:13:08.459401982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:13:08.461777 containerd[1441]: time="2025-01-30T13:13:08.459612784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:13:08.461777 containerd[1441]: time="2025-01-30T13:13:08.459716933Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:13:08.461777 containerd[1441]: time="2025-01-30T13:13:08.459730614Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:13:08.461777 containerd[1441]: time="2025-01-30T13:13:08.459805358Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:13:08.461777 containerd[1441]: time="2025-01-30T13:13:08.459848278Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:13:08.463403 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:13:08.466325 containerd[1441]: time="2025-01-30T13:13:08.466283984Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:13:08.466472 containerd[1441]: time="2025-01-30T13:13:08.466377497Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:13:08.466472 containerd[1441]: time="2025-01-30T13:13:08.466451782Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:13:08.466472 containerd[1441]: time="2025-01-30T13:13:08.466470844Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:13:08.466558 containerd[1441]: time="2025-01-30T13:13:08.466487611Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:13:08.466693 containerd[1441]: time="2025-01-30T13:13:08.466660665Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:13:08.467022 containerd[1441]: time="2025-01-30T13:13:08.466996263Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:13:08.467141 containerd[1441]: time="2025-01-30T13:13:08.467123853Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:13:08.467173 containerd[1441]: time="2025-01-30T13:13:08.467146210Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:13:08.467173 containerd[1441]: time="2025-01-30T13:13:08.467163019Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:13:08.467214 containerd[1441]: time="2025-01-30T13:13:08.467177617Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:13:08.467214 containerd[1441]: time="2025-01-30T13:13:08.467191298Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:13:08.467214 containerd[1441]: time="2025-01-30T13:13:08.467203894Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 30 13:13:08.467265 containerd[1441]: time="2025-01-30T13:13:08.467217534Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:13:08.467265 containerd[1441]: time="2025-01-30T13:13:08.467233634Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:13:08.467265 containerd[1441]: time="2025-01-30T13:13:08.467247189Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:13:08.467265 containerd[1441]: time="2025-01-30T13:13:08.467261871Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:13:08.467335 containerd[1441]: time="2025-01-30T13:13:08.467274885Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:13:08.467335 containerd[1441]: time="2025-01-30T13:13:08.467295948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:13:08.467335 containerd[1441]: time="2025-01-30T13:13:08.467310088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:13:08.467335 containerd[1441]: time="2025-01-30T13:13:08.467322434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:13:08.467588 containerd[1441]: time="2025-01-30T13:13:08.467337157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:13:08.467588 containerd[1441]: time="2025-01-30T13:13:08.467350338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:13:08.467588 containerd[1441]: time="2025-01-30T13:13:08.467388377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:13:08.467588 containerd[1441]: time="2025-01-30T13:13:08.467400848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:13:08.467588 containerd[1441]: time="2025-01-30T13:13:08.467414446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:13:08.467588 containerd[1441]: time="2025-01-30T13:13:08.467429795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:13:08.467588 containerd[1441]: time="2025-01-30T13:13:08.467445853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:13:08.467588 containerd[1441]: time="2025-01-30T13:13:08.467459159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:13:08.467588 containerd[1441]: time="2025-01-30T13:13:08.467472422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:13:08.467588 containerd[1441]: time="2025-01-30T13:13:08.467484560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:13:08.467588 containerd[1441]: time="2025-01-30T13:13:08.467499200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 30 13:13:08.467588 containerd[1441]: time="2025-01-30T13:13:08.467520556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:13:08.467588 containerd[1441]: time="2025-01-30T13:13:08.467533819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:13:08.467588 containerd[1441]: time="2025-01-30T13:13:08.467559471Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:13:08.467914 containerd[1441]: time="2025-01-30T13:13:08.467875632Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:13:08.467914 containerd[1441]: time="2025-01-30T13:13:08.467897488Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:13:08.467914 containerd[1441]: time="2025-01-30T13:13:08.467908124Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:13:08.468000 containerd[1441]: time="2025-01-30T13:13:08.467923139Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:13:08.468000 containerd[1441]: time="2025-01-30T13:13:08.467996423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:13:08.468302 containerd[1441]: time="2025-01-30T13:13:08.468010188Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:13:08.468302 containerd[1441]: time="2025-01-30T13:13:08.468027831Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:13:08.468302 containerd[1441]: time="2025-01-30T13:13:08.468044932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:13:08.468533 containerd[1441]: time="2025-01-30T13:13:08.468481718Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:13:08.468667 containerd[1441]: time="2025-01-30T13:13:08.468539236Z" level=info msg="Connect containerd service" Jan 30 13:13:08.468667 containerd[1441]: time="2025-01-30T13:13:08.468573980Z" level=info msg="using legacy CRI server" Jan 30 13:13:08.468667 containerd[1441]: time="2025-01-30T13:13:08.468581988Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:13:08.469302 containerd[1441]: time="2025-01-30T13:13:08.469274622Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:13:08.473117 containerd[1441]: time="2025-01-30T13:13:08.473068467Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:13:08.475202 
containerd[1441]: time="2025-01-30T13:13:08.475174314Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:13:08.475283 containerd[1441]: time="2025-01-30T13:13:08.475229121Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:13:08.475465 containerd[1441]: time="2025-01-30T13:13:08.475419735Z" level=info msg="Start subscribing containerd event" Jan 30 13:13:08.475502 containerd[1441]: time="2025-01-30T13:13:08.475471247Z" level=info msg="Start recovering state" Jan 30 13:13:08.475652 containerd[1441]: time="2025-01-30T13:13:08.475548410Z" level=info msg="Start event monitor" Jan 30 13:13:08.475652 containerd[1441]: time="2025-01-30T13:13:08.475563676Z" level=info msg="Start snapshots syncer" Jan 30 13:13:08.475652 containerd[1441]: time="2025-01-30T13:13:08.475576064Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:13:08.475652 containerd[1441]: time="2025-01-30T13:13:08.475583780Z" level=info msg="Start streaming server" Jan 30 13:13:08.476200 containerd[1441]: time="2025-01-30T13:13:08.476018063Z" level=info msg="containerd successfully booted in 0.049182s" Jan 30 13:13:08.476295 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:13:08.484073 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:13:08.500763 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:13:08.503107 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 13:13:08.504304 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:13:08.631831 tar[1439]: linux-arm64/README.md Jan 30 13:13:08.645886 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:13:09.390337 systemd-networkd[1374]: eth0: Gained IPv6LL Jan 30 13:13:09.392789 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:13:09.394204 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:13:09.407626 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:13:09.409860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:13:09.411747 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:13:09.426477 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:13:09.426665 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:13:09.428709 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:13:09.430281 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:13:09.987904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:13:09.989534 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:13:09.993055 (kubelet)[1529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:13:09.993540 systemd[1]: Startup finished in 564ms (kernel) + 4.694s (initrd) + 3.633s (userspace) = 8.893s. 
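[Illustration, not part of the log] The containerd entries above end with the daemon reporting that it serves its API on /run/containerd/containerd.sock (plus the companion ttrpc socket) and that it booted in about 49 ms. A minimal sketch of talking to that socket, assuming the v1 Go client module github.com/containerd/containerd is available and the program runs with access to the socket, could look like this; it only queries the daemon version and is not something the boot sequence itself runs.

// versioncheck.go - minimal sketch: query the containerd daemon that the log
// shows listening on /run/containerd/containerd.sock. Assumes the
// github.com/containerd/containerd Go client module and socket access.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// Version itself does not need a namespace, but most other calls do,
	// so set the one the CRI plugin uses.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	v, err := client.Version(ctx)
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
}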
Jan 30 13:13:10.003161 agetty[1503]: failed to open credentials directory Jan 30 13:13:10.004051 agetty[1502]: failed to open credentials directory Jan 30 13:13:10.463803 kubelet[1529]: E0130 13:13:10.463678 1529 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:13:10.466271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:13:10.466451 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:13:14.420160 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:13:14.422011 systemd[1]: Started sshd@0-10.0.0.147:22-10.0.0.1:44218.service - OpenSSH per-connection server daemon (10.0.0.1:44218). Jan 30 13:13:14.488560 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 44218 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:13:14.490643 sshd-session[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:13:14.498646 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:13:14.509630 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:13:14.511466 systemd-logind[1423]: New session 1 of user core. Jan 30 13:13:14.519633 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:13:14.524741 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:13:14.530881 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:13:14.630910 systemd[1547]: Queued start job for default target default.target. Jan 30 13:13:14.642424 systemd[1547]: Created slice app.slice - User Application Slice. Jan 30 13:13:14.642471 systemd[1547]: Reached target paths.target - Paths. Jan 30 13:13:14.642483 systemd[1547]: Reached target timers.target - Timers. Jan 30 13:13:14.643856 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:13:14.656141 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:13:14.656254 systemd[1547]: Reached target sockets.target - Sockets. Jan 30 13:13:14.656269 systemd[1547]: Reached target basic.target - Basic System. Jan 30 13:13:14.656310 systemd[1547]: Reached target default.target - Main User Target. Jan 30 13:13:14.656338 systemd[1547]: Startup finished in 118ms. Jan 30 13:13:14.656595 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:13:14.658836 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:13:14.732821 systemd[1]: Started sshd@1-10.0.0.147:22-10.0.0.1:44250.service - OpenSSH per-connection server daemon (10.0.0.1:44250). Jan 30 13:13:14.775115 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 44250 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:13:14.776909 sshd-session[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:13:14.782890 systemd-logind[1423]: New session 2 of user core. Jan 30 13:13:14.792536 systemd[1]: Started session-2.scope - Session 2 of User core. 
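[Illustration, not part of the log] The kubelet exit above is the expected first-boot behaviour on a node that has not yet joined a cluster: the unit starts, finds no /var/lib/kubelet/config.yaml (that file is normally written by kubeadm init or kubeadm join), and exits with status 1 so that systemd can retry later. The hypothetical Go sketch below only mirrors that pre-flight check and the logged error text; it is not kubelet source.

// kubeletconfigcheck.go - hypothetical sketch reproducing the check behind the
// logged failure: kubelet cannot start until /var/lib/kubelet/config.yaml
// exists (kubeadm init/join normally creates it).
package main

import (
	"fmt"
	"os"
)

const kubeletConfig = "/var/lib/kubelet/config.yaml"

func main() {
	if _, err := os.Stat(kubeletConfig); err != nil {
		if os.IsNotExist(err) {
			fmt.Printf("failed to load Kubelet config file %s: no such file or directory\n", kubeletConfig)
			fmt.Println("hint: this file is created by 'kubeadm init' or 'kubeadm join'")
			os.Exit(1)
		}
		fmt.Printf("cannot read %s: %v\n", kubeletConfig, err)
		os.Exit(1)
	}
	fmt.Printf("%s present; kubelet would proceed to parse it\n", kubeletConfig)
}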
Jan 30 13:13:14.847053 sshd[1560]: Connection closed by 10.0.0.1 port 44250 Jan 30 13:13:14.847679 sshd-session[1558]: pam_unix(sshd:session): session closed for user core Jan 30 13:13:14.864187 systemd[1]: sshd@1-10.0.0.147:22-10.0.0.1:44250.service: Deactivated successfully. Jan 30 13:13:14.867232 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:13:14.870906 systemd[1]: Started sshd@2-10.0.0.147:22-10.0.0.1:44276.service - OpenSSH per-connection server daemon (10.0.0.1:44276). Jan 30 13:13:14.875290 systemd-logind[1423]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:13:14.876555 systemd-logind[1423]: Removed session 2. Jan 30 13:13:14.934832 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 44276 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:13:14.936508 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:13:14.940633 systemd-logind[1423]: New session 3 of user core. Jan 30 13:13:14.952559 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:13:15.002255 sshd[1567]: Connection closed by 10.0.0.1 port 44276 Jan 30 13:13:15.003170 sshd-session[1565]: pam_unix(sshd:session): session closed for user core Jan 30 13:13:15.017061 systemd[1]: sshd@2-10.0.0.147:22-10.0.0.1:44276.service: Deactivated successfully. Jan 30 13:13:15.019688 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:13:15.021827 systemd-logind[1423]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:13:15.025539 systemd[1]: Started sshd@3-10.0.0.147:22-10.0.0.1:44294.service - OpenSSH per-connection server daemon (10.0.0.1:44294). Jan 30 13:13:15.026861 systemd-logind[1423]: Removed session 3. Jan 30 13:13:15.069532 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 44294 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:13:15.071061 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:13:15.076572 systemd-logind[1423]: New session 4 of user core. Jan 30 13:13:15.085561 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:13:15.141795 sshd[1574]: Connection closed by 10.0.0.1 port 44294 Jan 30 13:13:15.141532 sshd-session[1572]: pam_unix(sshd:session): session closed for user core Jan 30 13:13:15.154876 systemd[1]: sshd@3-10.0.0.147:22-10.0.0.1:44294.service: Deactivated successfully. Jan 30 13:13:15.156329 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:13:15.157652 systemd-logind[1423]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:13:15.159093 systemd[1]: Started sshd@4-10.0.0.147:22-10.0.0.1:44314.service - OpenSSH per-connection server daemon (10.0.0.1:44314). Jan 30 13:13:15.159991 systemd-logind[1423]: Removed session 4. Jan 30 13:13:15.207257 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 44314 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:13:15.208625 sshd-session[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:13:15.214643 systemd-logind[1423]: New session 5 of user core. Jan 30 13:13:15.223578 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 30 13:13:15.308026 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:13:15.308643 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:13:15.327989 sudo[1582]: pam_unix(sudo:session): session closed for user root Jan 30 13:13:15.330259 sshd[1581]: Connection closed by 10.0.0.1 port 44314 Jan 30 13:13:15.330700 sshd-session[1579]: pam_unix(sshd:session): session closed for user core Jan 30 13:13:15.342151 systemd[1]: sshd@4-10.0.0.147:22-10.0.0.1:44314.service: Deactivated successfully. Jan 30 13:13:15.345161 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:13:15.346490 systemd-logind[1423]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:13:15.348332 systemd[1]: Started sshd@5-10.0.0.147:22-10.0.0.1:44332.service - OpenSSH per-connection server daemon (10.0.0.1:44332). Jan 30 13:13:15.350135 systemd-logind[1423]: Removed session 5. Jan 30 13:13:15.391979 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 44332 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:13:15.393439 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:13:15.398157 systemd-logind[1423]: New session 6 of user core. Jan 30 13:13:15.405529 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:13:15.460813 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:13:15.461089 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:13:15.464216 sudo[1591]: pam_unix(sudo:session): session closed for user root Jan 30 13:13:15.469424 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 13:13:15.469708 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:13:15.493551 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:13:15.528445 augenrules[1613]: No rules Jan 30 13:13:15.529946 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:13:15.530172 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:13:15.531237 sudo[1590]: pam_unix(sudo:session): session closed for user root Jan 30 13:13:15.535400 sshd[1589]: Connection closed by 10.0.0.1 port 44332 Jan 30 13:13:15.537274 sshd-session[1587]: pam_unix(sshd:session): session closed for user core Jan 30 13:13:15.551015 systemd[1]: sshd@5-10.0.0.147:22-10.0.0.1:44332.service: Deactivated successfully. Jan 30 13:13:15.552888 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:13:15.554832 systemd-logind[1423]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:13:15.559649 systemd[1]: Started sshd@6-10.0.0.147:22-10.0.0.1:44344.service - OpenSSH per-connection server daemon (10.0.0.1:44344). Jan 30 13:13:15.561212 systemd-logind[1423]: Removed session 6. Jan 30 13:13:15.602099 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 44344 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:13:15.603609 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:13:15.610809 systemd-logind[1423]: New session 7 of user core. Jan 30 13:13:15.624877 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 30 13:13:15.677609 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:13:15.677875 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:13:16.166665 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:13:16.166774 (dockerd)[1644]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:13:16.442183 dockerd[1644]: time="2025-01-30T13:13:16.442057009Z" level=info msg="Starting up" Jan 30 13:13:16.605406 dockerd[1644]: time="2025-01-30T13:13:16.605349499Z" level=info msg="Loading containers: start." Jan 30 13:13:16.788386 kernel: Initializing XFRM netlink socket Jan 30 13:13:16.877684 systemd-networkd[1374]: docker0: Link UP Jan 30 13:13:16.930184 dockerd[1644]: time="2025-01-30T13:13:16.930138725Z" level=info msg="Loading containers: done." Jan 30 13:13:16.948975 dockerd[1644]: time="2025-01-30T13:13:16.948923117Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:13:16.949120 dockerd[1644]: time="2025-01-30T13:13:16.949028767Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 13:13:16.949227 dockerd[1644]: time="2025-01-30T13:13:16.949200249Z" level=info msg="Daemon has completed initialization" Jan 30 13:13:16.978617 dockerd[1644]: time="2025-01-30T13:13:16.978498311Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:13:16.978890 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:13:17.446968 containerd[1441]: time="2025-01-30T13:13:17.446911045Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 13:13:18.076109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount576501059.mount: Deactivated successfully. 
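[Illustration, not part of the log] The dockerd entries above finish with the daemon announcing "API listen on /run/docker.sock". As a hedged, standard-library-only sketch of talking to that Unix socket: the Docker Engine API exposes /_ping (returns "OK") and /version over HTTP, so a Go program can dial the socket directly; the host name in the URL is ignored and is just a placeholder.

// dockerping.go - minimal sketch: hit the Docker Engine API on the Unix socket
// the log reports ("API listen on /run/docker.sock"). Standard library only;
// /_ping returns "OK" when the daemon is healthy. Needs access to the socket.
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore host/port from the URL and always dial the Unix socket.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}
	resp, err := client.Get("http://docker/_ping")
	if err != nil {
		log.Fatalf("ping: %v", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("docker ping: %s (status %d)\n", body, resp.StatusCode)
}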
Jan 30 13:13:18.937512 containerd[1441]: time="2025-01-30T13:13:18.937467005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:18.938866 containerd[1441]: time="2025-01-30T13:13:18.938823409Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=26220950" Jan 30 13:13:18.940380 containerd[1441]: time="2025-01-30T13:13:18.940326484Z" level=info msg="ImageCreate event name:\"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:18.943047 containerd[1441]: time="2025-01-30T13:13:18.943017287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:18.944211 containerd[1441]: time="2025-01-30T13:13:18.944185518Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"26217748\" in 1.497228909s" Jan 30 13:13:18.944402 containerd[1441]: time="2025-01-30T13:13:18.944322117Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\"" Jan 30 13:13:18.945072 containerd[1441]: time="2025-01-30T13:13:18.945046978Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 13:13:20.166897 containerd[1441]: time="2025-01-30T13:13:20.166848825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:20.167912 containerd[1441]: time="2025-01-30T13:13:20.167838553Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=22527109" Jan 30 13:13:20.168444 containerd[1441]: time="2025-01-30T13:13:20.168416810Z" level=info msg="ImageCreate event name:\"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:20.171807 containerd[1441]: time="2025-01-30T13:13:20.171742963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:20.172864 containerd[1441]: time="2025-01-30T13:13:20.172729343Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"23968433\" in 1.227649624s" Jan 30 13:13:20.172864 containerd[1441]: time="2025-01-30T13:13:20.172761941Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\"" Jan 30 13:13:20.173843 
containerd[1441]: time="2025-01-30T13:13:20.173816019Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 13:13:20.717141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:13:20.726578 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:13:20.828952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:13:20.833061 (kubelet)[1907]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:13:20.881163 kubelet[1907]: E0130 13:13:20.881100 1907 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:13:20.884176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:13:20.884344 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:13:21.596385 containerd[1441]: time="2025-01-30T13:13:21.596094990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:21.596827 containerd[1441]: time="2025-01-30T13:13:21.596739248Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=17481115" Jan 30 13:13:21.597499 containerd[1441]: time="2025-01-30T13:13:21.597469105Z" level=info msg="ImageCreate event name:\"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:21.600797 containerd[1441]: time="2025-01-30T13:13:21.600756687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:21.601528 containerd[1441]: time="2025-01-30T13:13:21.601401469Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"18922457\" in 1.427550924s" Jan 30 13:13:21.601528 containerd[1441]: time="2025-01-30T13:13:21.601430768Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\"" Jan 30 13:13:21.602054 containerd[1441]: time="2025-01-30T13:13:21.602006190Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 13:13:22.716783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3566786133.mount: Deactivated successfully. 
Jan 30 13:13:22.933433 containerd[1441]: time="2025-01-30T13:13:22.933376190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:22.934364 containerd[1441]: time="2025-01-30T13:13:22.934326816Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364399" Jan 30 13:13:22.935016 containerd[1441]: time="2025-01-30T13:13:22.934993139Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:22.936794 containerd[1441]: time="2025-01-30T13:13:22.936742717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:22.937436 containerd[1441]: time="2025-01-30T13:13:22.937411899Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 1.335373073s" Jan 30 13:13:22.937709 containerd[1441]: time="2025-01-30T13:13:22.937526288Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\"" Jan 30 13:13:22.938367 containerd[1441]: time="2025-01-30T13:13:22.938345936Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 13:13:23.618935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2543992782.mount: Deactivated successfully. 
Jan 30 13:13:24.463975 containerd[1441]: time="2025-01-30T13:13:24.463919985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:24.464531 containerd[1441]: time="2025-01-30T13:13:24.464474047Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jan 30 13:13:24.468213 containerd[1441]: time="2025-01-30T13:13:24.468148820Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:24.472464 containerd[1441]: time="2025-01-30T13:13:24.471746768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:24.472556 containerd[1441]: time="2025-01-30T13:13:24.472494764Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.534045688s" Jan 30 13:13:24.472556 containerd[1441]: time="2025-01-30T13:13:24.472531749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 30 13:13:24.473026 containerd[1441]: time="2025-01-30T13:13:24.472966774Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:13:24.988823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2877758617.mount: Deactivated successfully. 
Jan 30 13:13:24.994393 containerd[1441]: time="2025-01-30T13:13:24.994318796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:24.996537 containerd[1441]: time="2025-01-30T13:13:24.996474662Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jan 30 13:13:24.997537 containerd[1441]: time="2025-01-30T13:13:24.997475649Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:25.001382 containerd[1441]: time="2025-01-30T13:13:25.001329481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:25.002293 containerd[1441]: time="2025-01-30T13:13:25.002036157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 529.031518ms" Jan 30 13:13:25.002293 containerd[1441]: time="2025-01-30T13:13:25.002068338Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 30 13:13:25.002650 containerd[1441]: time="2025-01-30T13:13:25.002605701Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 13:13:25.607880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2813215311.mount: Deactivated successfully. Jan 30 13:13:27.232395 containerd[1441]: time="2025-01-30T13:13:27.232266905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:27.234352 containerd[1441]: time="2025-01-30T13:13:27.234295059Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Jan 30 13:13:27.235994 containerd[1441]: time="2025-01-30T13:13:27.235944618Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:27.239355 containerd[1441]: time="2025-01-30T13:13:27.239301730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:27.240178 containerd[1441]: time="2025-01-30T13:13:27.240116476Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.23747121s" Jan 30 13:13:27.240178 containerd[1441]: time="2025-01-30T13:13:27.240148945Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 30 13:13:31.134635 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
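[Illustration, not part of the log] Each completed pull between 13:13:17 and 13:13:27 reports both an image size and a wall-clock duration, so a rough effective throughput can be read straight off the log. The sketch below just redoes that arithmetic for three of the pulls; sizes and durations are copied verbatim from the messages above, and the result is only approximate (registry bytes over wall-clock time, registry latency included, which is why the tiny pause image looks so slow).

// pullrate.go - back-of-the-envelope throughput for image pulls logged above.
// Sizes and durations are taken verbatim from the containerd messages.
package main

import "fmt"

func main() {
	pulls := []struct {
		image   string
		bytes   float64 // size reported by containerd
		seconds float64 // "in ..." duration from the log
	}{
		{"registry.k8s.io/kube-apiserver:v1.32.1", 26217748, 1.497228909},
		{"registry.k8s.io/pause:3.10", 267933, 0.529031518},
		{"registry.k8s.io/etcd:3.5.16-0", 67941650, 2.23747121},
	}
	for _, p := range pulls {
		mbps := p.bytes / p.seconds / 1e6
		fmt.Printf("%-40s %9.0f bytes in %6.3fs = %5.1f MB/s\n",
			p.image, p.bytes, p.seconds, mbps)
	}
}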
Jan 30 13:13:31.143557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:13:31.254549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:13:31.276674 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:13:31.316375 kubelet[2068]: E0130 13:13:31.316293 2068 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:13:31.319219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:13:31.319525 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:13:32.307177 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:13:32.321622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:13:32.346109 systemd[1]: Reloading requested from client PID 2083 ('systemctl') (unit session-7.scope)... Jan 30 13:13:32.346126 systemd[1]: Reloading... Jan 30 13:13:32.427418 zram_generator::config[2125]: No configuration found. Jan 30 13:13:32.594534 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:13:32.648277 systemd[1]: Reloading finished in 301 ms. Jan 30 13:13:32.690005 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:13:32.692791 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:13:32.693007 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:13:32.694581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:13:32.796924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:13:32.801047 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:13:32.841780 kubelet[2169]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:13:32.841780 kubelet[2169]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:13:32.841780 kubelet[2169]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:13:32.841780 kubelet[2169]: I0130 13:13:32.841601 2169 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:13:33.246592 kubelet[2169]: I0130 13:13:33.246553 2169 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:13:33.246592 kubelet[2169]: I0130 13:13:33.246583 2169 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:13:33.246877 kubelet[2169]: I0130 13:13:33.246862 2169 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:13:33.283216 kubelet[2169]: E0130 13:13:33.283161 2169 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:13:33.284873 kubelet[2169]: I0130 13:13:33.284774 2169 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:13:33.292222 kubelet[2169]: E0130 13:13:33.292178 2169 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:13:33.292222 kubelet[2169]: I0130 13:13:33.292210 2169 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:13:33.294851 kubelet[2169]: I0130 13:13:33.294820 2169 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:13:33.295055 kubelet[2169]: I0130 13:13:33.295019 2169 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:13:33.295206 kubelet[2169]: I0130 13:13:33.295047 2169 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:13:33.295288 kubelet[2169]: I0130 13:13:33.295270 2169 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:13:33.295288 kubelet[2169]: I0130 13:13:33.295279 2169 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:13:33.295517 kubelet[2169]: I0130 13:13:33.295489 2169 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:13:33.298591 kubelet[2169]: I0130 13:13:33.298559 2169 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:13:33.298591 kubelet[2169]: I0130 13:13:33.298587 2169 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:13:33.298649 kubelet[2169]: I0130 13:13:33.298615 2169 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:13:33.298649 kubelet[2169]: I0130 13:13:33.298625 2169 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:13:33.299363 kubelet[2169]: W0130 13:13:33.299303 2169 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 30 13:13:33.299404 kubelet[2169]: E0130 13:13:33.299381 2169 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:13:33.302385 kubelet[2169]: W0130 13:13:33.302337 2169 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 30 13:13:33.302520 kubelet[2169]: E0130 13:13:33.302490 2169 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:13:33.307730 kubelet[2169]: I0130 13:13:33.306617 2169 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:13:33.307730 kubelet[2169]: I0130 13:13:33.307409 2169 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:13:33.310927 kubelet[2169]: W0130 13:13:33.310675 2169 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:13:33.311583 kubelet[2169]: I0130 13:13:33.311552 2169 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:13:33.311634 kubelet[2169]: I0130 13:13:33.311590 2169 server.go:1287] "Started kubelet" Jan 30 13:13:33.312596 kubelet[2169]: I0130 13:13:33.312126 2169 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:13:33.312596 kubelet[2169]: I0130 13:13:33.312208 2169 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:13:33.312596 kubelet[2169]: I0130 13:13:33.312475 2169 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:13:33.312734 kubelet[2169]: I0130 13:13:33.312712 2169 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:13:33.312935 kubelet[2169]: I0130 13:13:33.312921 2169 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:13:33.313069 kubelet[2169]: I0130 13:13:33.313055 2169 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:13:33.313167 kubelet[2169]: I0130 13:13:33.313141 2169 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:13:33.314321 kubelet[2169]: I0130 13:13:33.314299 2169 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:13:33.314448 kubelet[2169]: E0130 13:13:33.314307 2169 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:13:33.314667 kubelet[2169]: W0130 13:13:33.314635 2169 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 30 13:13:33.315251 kubelet[2169]: E0130 13:13:33.315230 2169 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:13:33.315327 kubelet[2169]: E0130 13:13:33.315046 2169 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="200ms" Jan 30 13:13:33.315399 kubelet[2169]: I0130 13:13:33.315135 2169 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:13:33.315461 kubelet[2169]: E0130 13:13:33.314773 2169 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.147:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.147:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7aa10ccfcc89 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:13:33.311569033 +0000 UTC m=+0.505378147,LastTimestamp:2025-01-30 13:13:33.311569033 +0000 UTC m=+0.505378147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:13:33.317273 kubelet[2169]: E0130 13:13:33.317252 2169 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:13:33.317395 kubelet[2169]: I0130 13:13:33.317253 2169 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:13:33.317476 kubelet[2169]: I0130 13:13:33.317465 2169 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:13:33.317620 kubelet[2169]: I0130 13:13:33.317591 2169 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:13:33.328846 kubelet[2169]: I0130 13:13:33.328807 2169 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:13:33.329934 kubelet[2169]: I0130 13:13:33.329915 2169 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:13:33.330023 kubelet[2169]: I0130 13:13:33.330014 2169 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:13:33.330082 kubelet[2169]: I0130 13:13:33.330074 2169 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
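[Illustration, not part of the log] The NodeConfig dump a few entries up lists the hard-eviction thresholds this kubelet will enforce: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. As a worked example only, the sketch below evaluates those thresholds against made-up node stats; the thresholds are the logged ones, the stats are hypothetical.

// evictioncheck.go - illustrative only: evaluate the hard-eviction thresholds
// printed in the kubelet NodeConfig above against hypothetical node stats.
package main

import "fmt"

type threshold struct {
	signal   string
	absBytes float64 // absolute threshold (e.g. 100Mi); 0 when percentage-based
	fraction float64 // fractional threshold of capacity; 0 when absolute
}

func main() {
	// Thresholds exactly as printed in the NodeConfig entry above.
	thresholds := []threshold{
		{"memory.available", 100 * 1024 * 1024, 0},
		{"nodefs.available", 0, 0.10},
		{"nodefs.inodesFree", 0, 0.05},
		{"imagefs.available", 0, 0.15},
		{"imagefs.inodesFree", 0, 0.05},
	}
	// Hypothetical stats (NOT from the log): {currently available, total capacity}.
	// Byte counts for the *.available signals, inode counts for *.inodesFree.
	stats := map[string][2]float64{
		"memory.available":   {350e6, 2.5e9},
		"nodefs.available":   {3.5e9, 40e9},
		"nodefs.inodesFree":  {1.8e6, 2.4e6},
		"imagefs.available":  {9e9, 40e9},
		"imagefs.inodesFree": {1.8e6, 2.4e6},
	}
	for _, t := range thresholds {
		avail, total := stats[t.signal][0], stats[t.signal][1]
		limit := t.absBytes
		if limit == 0 {
			limit = t.fraction * total
		}
		fmt.Printf("%-19s available=%.3g threshold=%.3g evict=%v\n",
			t.signal, avail, limit, avail < limit)
	}
}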
Jan 30 13:13:33.330137 kubelet[2169]: I0130 13:13:33.330128 2169 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:13:33.330374 kubelet[2169]: E0130 13:13:33.330213 2169 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:13:33.332412 kubelet[2169]: I0130 13:13:33.332284 2169 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:13:33.332412 kubelet[2169]: I0130 13:13:33.332298 2169 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:13:33.332412 kubelet[2169]: I0130 13:13:33.332312 2169 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:13:33.333566 kubelet[2169]: W0130 13:13:33.333489 2169 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 30 13:13:33.333566 kubelet[2169]: E0130 13:13:33.333520 2169 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:13:33.409696 kubelet[2169]: I0130 13:13:33.409650 2169 policy_none.go:49] "None policy: Start" Jan 30 13:13:33.409696 kubelet[2169]: I0130 13:13:33.409684 2169 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:13:33.409696 kubelet[2169]: I0130 13:13:33.409696 2169 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:13:33.414571 kubelet[2169]: E0130 13:13:33.414538 2169 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:13:33.415207 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:13:33.430399 kubelet[2169]: E0130 13:13:33.430370 2169 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:13:33.430634 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:13:33.433325 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:13:33.451626 kubelet[2169]: I0130 13:13:33.451166 2169 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:13:33.451626 kubelet[2169]: I0130 13:13:33.451387 2169 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:13:33.451626 kubelet[2169]: I0130 13:13:33.451399 2169 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:13:33.451781 kubelet[2169]: I0130 13:13:33.451660 2169 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:13:33.453424 kubelet[2169]: E0130 13:13:33.453329 2169 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 13:13:33.453424 kubelet[2169]: E0130 13:13:33.453393 2169 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:13:33.516506 kubelet[2169]: E0130 13:13:33.516384 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="400ms" Jan 30 13:13:33.553571 kubelet[2169]: I0130 13:13:33.553526 2169 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:13:33.553998 kubelet[2169]: E0130 13:13:33.553963 2169 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Jan 30 13:13:33.639686 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice. Jan 30 13:13:33.662817 kubelet[2169]: E0130 13:13:33.662785 2169 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:13:33.665974 systemd[1]: Created slice kubepods-burstable-podd66afb3e8401bdb68e36acb99e14b3f4.slice - libcontainer container kubepods-burstable-podd66afb3e8401bdb68e36acb99e14b3f4.slice. Jan 30 13:13:33.685659 kubelet[2169]: E0130 13:13:33.685475 2169 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:13:33.687743 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice. 
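[Illustration, not part of the log] The lease controller above logs "Failed to ensure lease exists, will retry" first with interval="200ms" and now with interval="400ms" (an 800ms retry follows a few entries later), i.e. the retry interval doubles while the API server at 10.0.0.147:6443 keeps refusing connections. The sketch below is a generic doubling-backoff loop in that spirit; it is not the kubelet's actual lease controller, and the 7s cap and the stand-in ensureLease call are assumptions made purely for illustration.

// leasebackoff.go - generic doubling backoff in the spirit of the retry
// intervals logged above (200ms -> 400ms -> 800ms ...). NOT kubelet code.
package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureLease stands in for the real call to the API server; here it always
// fails, the way the log shows while 10.0.0.147:6443 refuses connections.
func ensureLease() error {
	return errors.New("dial tcp 10.0.0.147:6443: connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond  // first interval seen in the log
	const maxInterval = 7 * time.Second // assumed cap, not taken from the log
	for attempt := 1; attempt <= 6; attempt++ {
		if err := ensureLease(); err == nil {
			fmt.Println("lease ensured")
			return
		}
		fmt.Printf("attempt %d: lease not ensured, will retry, interval=%s\n", attempt, interval)
		time.Sleep(interval)
		interval *= 2 // 200ms -> 400ms -> 800ms ..., matching the logged intervals
		if interval > maxInterval {
			interval = maxInterval
		}
	}
	fmt.Println("still failing after 6 attempts (sketch only)")
}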
Jan 30 13:13:33.689418 kubelet[2169]: E0130 13:13:33.689375 2169 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:13:33.716752 kubelet[2169]: I0130 13:13:33.716698 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:13:33.716752 kubelet[2169]: I0130 13:13:33.716736 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d66afb3e8401bdb68e36acb99e14b3f4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d66afb3e8401bdb68e36acb99e14b3f4\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:13:33.716752 kubelet[2169]: I0130 13:13:33.716757 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:13:33.716920 kubelet[2169]: I0130 13:13:33.716780 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:13:33.716920 kubelet[2169]: I0130 13:13:33.716797 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:13:33.716920 kubelet[2169]: I0130 13:13:33.716814 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:13:33.716920 kubelet[2169]: I0130 13:13:33.716829 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:13:33.716920 kubelet[2169]: I0130 13:13:33.716844 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d66afb3e8401bdb68e36acb99e14b3f4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d66afb3e8401bdb68e36acb99e14b3f4\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:13:33.717021 kubelet[2169]: I0130 13:13:33.716882 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d66afb3e8401bdb68e36acb99e14b3f4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d66afb3e8401bdb68e36acb99e14b3f4\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:13:33.755837 kubelet[2169]: I0130 13:13:33.755789 2169 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:13:33.756138 kubelet[2169]: E0130 13:13:33.756091 2169 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Jan 30 13:13:33.917169 kubelet[2169]: E0130 13:13:33.917010 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="800ms" Jan 30 13:13:33.963547 kubelet[2169]: E0130 13:13:33.963504 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:33.964271 containerd[1441]: time="2025-01-30T13:13:33.964219107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}" Jan 30 13:13:33.986484 kubelet[2169]: E0130 13:13:33.986448 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:33.987279 containerd[1441]: time="2025-01-30T13:13:33.987113234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d66afb3e8401bdb68e36acb99e14b3f4,Namespace:kube-system,Attempt:0,}" Jan 30 13:13:33.990755 kubelet[2169]: E0130 13:13:33.990722 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:33.991302 containerd[1441]: time="2025-01-30T13:13:33.991094225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}" Jan 30 13:13:34.129757 kubelet[2169]: E0130 13:13:34.129629 2169 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.147:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.147:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7aa10ccfcc89 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:13:33.311569033 +0000 UTC m=+0.505378147,LastTimestamp:2025-01-30 13:13:33.311569033 +0000 UTC m=+0.505378147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:13:34.157772 kubelet[2169]: I0130 13:13:34.157698 2169 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:13:34.158053 kubelet[2169]: E0130 13:13:34.158020 2169 kubelet_node_status.go:108] "Unable to register node with API server" 
err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Jan 30 13:13:34.178714 kubelet[2169]: W0130 13:13:34.178589 2169 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 30 13:13:34.178714 kubelet[2169]: E0130 13:13:34.178657 2169 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:13:34.227771 kubelet[2169]: W0130 13:13:34.227713 2169 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 30 13:13:34.227872 kubelet[2169]: E0130 13:13:34.227780 2169 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:13:34.418300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount87515622.mount: Deactivated successfully. Jan 30 13:13:34.423515 containerd[1441]: time="2025-01-30T13:13:34.423464404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:13:34.423933 containerd[1441]: time="2025-01-30T13:13:34.423880155Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 30 13:13:34.428899 containerd[1441]: time="2025-01-30T13:13:34.428762549Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:13:34.430879 containerd[1441]: time="2025-01-30T13:13:34.430792120Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:13:34.431512 containerd[1441]: time="2025-01-30T13:13:34.431477749Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:13:34.432584 containerd[1441]: time="2025-01-30T13:13:34.432530145Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:13:34.433357 containerd[1441]: time="2025-01-30T13:13:34.433165788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:13:34.435285 containerd[1441]: time="2025-01-30T13:13:34.435220913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:13:34.436390 containerd[1441]: time="2025-01-30T13:13:34.436331866Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 472.030676ms" Jan 30 13:13:34.437815 containerd[1441]: time="2025-01-30T13:13:34.437761321Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 446.608848ms" Jan 30 13:13:34.440237 containerd[1441]: time="2025-01-30T13:13:34.440186737Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 452.999393ms" Jan 30 13:13:34.600477 kubelet[2169]: W0130 13:13:34.600255 2169 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 30 13:13:34.600477 kubelet[2169]: E0130 13:13:34.600442 2169 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:13:34.623334 kubelet[2169]: W0130 13:13:34.623270 2169 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jan 30 13:13:34.623533 kubelet[2169]: E0130 13:13:34.623501 2169 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:13:34.645009 containerd[1441]: time="2025-01-30T13:13:34.643718534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:13:34.645009 containerd[1441]: time="2025-01-30T13:13:34.643758787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:13:34.645009 containerd[1441]: time="2025-01-30T13:13:34.643769241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:13:34.645009 containerd[1441]: time="2025-01-30T13:13:34.643870776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:13:34.645009 containerd[1441]: time="2025-01-30T13:13:34.642618635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:13:34.645009 containerd[1441]: time="2025-01-30T13:13:34.642749008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:13:34.645009 containerd[1441]: time="2025-01-30T13:13:34.642979314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:13:34.645597 containerd[1441]: time="2025-01-30T13:13:34.645069806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:13:34.652052 containerd[1441]: time="2025-01-30T13:13:34.651824923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:13:34.652052 containerd[1441]: time="2025-01-30T13:13:34.651987979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:13:34.652412 containerd[1441]: time="2025-01-30T13:13:34.652002999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:13:34.656517 containerd[1441]: time="2025-01-30T13:13:34.656359255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:13:34.673572 systemd[1]: Started cri-containerd-28c0b267780bc7b44e054d94513114b92f7e08f4b20254aeec35ceab25120122.scope - libcontainer container 28c0b267780bc7b44e054d94513114b92f7e08f4b20254aeec35ceab25120122. Jan 30 13:13:34.674807 systemd[1]: Started cri-containerd-60717adaa83f6d363e7c40a5646dd1a2d6418c16a5905223d867752ab0de1067.scope - libcontainer container 60717adaa83f6d363e7c40a5646dd1a2d6418c16a5905223d867752ab0de1067. Jan 30 13:13:34.678619 systemd[1]: Started cri-containerd-42c254421cf5079d5f0afc024197df4e61101a2222f470f00f286eb590fe3323.scope - libcontainer container 42c254421cf5079d5f0afc024197df4e61101a2222f470f00f286eb590fe3323. 
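Note on the retry cadence visible around this point: the kubelet cannot reach the API server at 10.0.0.147:6443 (connection refused), so node registration and the node lease both fail, and the lease controller's retry interval grows from the interval="800ms" above to the interval="1.6s" just below. The kubelet has its own retry logic; purely as an illustration of that kind of capped doubling backoff, here is a minimal sketch using the wait helpers from k8s.io/apimachinery (the lease-ensuring closure is a stand-in, not kubelet code):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Illustrative only: retry a flaky call with a doubling interval,
	// mirroring the 800ms -> 1.6s escalation visible in the log.
	backoff := wait.Backoff{
		Duration: 800 * time.Millisecond, // first retry interval
		Factor:   2.0,                    // double on each attempt
		Steps:    5,                      // give up after 5 tries
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Printf("attempt %d: ensuring node lease...\n", attempt)
		return attempt >= 3, nil // pretend the third attempt succeeds
	})
	if err != nil {
		fmt.Println("gave up:", err)
	}
}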
Jan 30 13:13:34.710827 containerd[1441]: time="2025-01-30T13:13:34.710689695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"28c0b267780bc7b44e054d94513114b92f7e08f4b20254aeec35ceab25120122\"" Jan 30 13:13:34.713088 kubelet[2169]: E0130 13:13:34.713063 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:34.716204 containerd[1441]: time="2025-01-30T13:13:34.715554907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"60717adaa83f6d363e7c40a5646dd1a2d6418c16a5905223d867752ab0de1067\"" Jan 30 13:13:34.716296 kubelet[2169]: E0130 13:13:34.716176 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:34.716616 containerd[1441]: time="2025-01-30T13:13:34.716587155Z" level=info msg="CreateContainer within sandbox \"28c0b267780bc7b44e054d94513114b92f7e08f4b20254aeec35ceab25120122\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:13:34.717300 containerd[1441]: time="2025-01-30T13:13:34.717240742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d66afb3e8401bdb68e36acb99e14b3f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"42c254421cf5079d5f0afc024197df4e61101a2222f470f00f286eb590fe3323\"" Jan 30 13:13:34.717906 kubelet[2169]: E0130 13:13:34.717879 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="1.6s" Jan 30 13:13:34.718243 kubelet[2169]: E0130 13:13:34.718126 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:34.718293 containerd[1441]: time="2025-01-30T13:13:34.718230094Z" level=info msg="CreateContainer within sandbox \"60717adaa83f6d363e7c40a5646dd1a2d6418c16a5905223d867752ab0de1067\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:13:34.720548 containerd[1441]: time="2025-01-30T13:13:34.720483642Z" level=info msg="CreateContainer within sandbox \"42c254421cf5079d5f0afc024197df4e61101a2222f470f00f286eb590fe3323\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:13:34.736568 containerd[1441]: time="2025-01-30T13:13:34.736516180Z" level=info msg="CreateContainer within sandbox \"28c0b267780bc7b44e054d94513114b92f7e08f4b20254aeec35ceab25120122\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b46d40ab82128174733cf47ec0ceb53e9b73da8dc3d18f6b01e94c54b8bc85d5\"" Jan 30 13:13:34.737202 containerd[1441]: time="2025-01-30T13:13:34.737168686Z" level=info msg="StartContainer for \"b46d40ab82128174733cf47ec0ceb53e9b73da8dc3d18f6b01e94c54b8bc85d5\"" Jan 30 13:13:34.744368 containerd[1441]: time="2025-01-30T13:13:34.744275109Z" level=info msg="CreateContainer within sandbox \"60717adaa83f6d363e7c40a5646dd1a2d6418c16a5905223d867752ab0de1067\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ee337524b88d93a186fbc518c4544c28706af142963d82ab8a739e55f7f59089\"" Jan 30 13:13:34.744932 containerd[1441]: time="2025-01-30T13:13:34.744904103Z" level=info msg="StartContainer for \"ee337524b88d93a186fbc518c4544c28706af142963d82ab8a739e55f7f59089\"" Jan 30 13:13:34.751814 containerd[1441]: time="2025-01-30T13:13:34.751754546Z" level=info msg="CreateContainer within sandbox \"42c254421cf5079d5f0afc024197df4e61101a2222f470f00f286eb590fe3323\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"682a3a2a16bfa18e36d53fd0b234934e8052e84f7fa1f89de138f1bd6d0819e4\"" Jan 30 13:13:34.752288 containerd[1441]: time="2025-01-30T13:13:34.752263020Z" level=info msg="StartContainer for \"682a3a2a16bfa18e36d53fd0b234934e8052e84f7fa1f89de138f1bd6d0819e4\"" Jan 30 13:13:34.764526 systemd[1]: Started cri-containerd-b46d40ab82128174733cf47ec0ceb53e9b73da8dc3d18f6b01e94c54b8bc85d5.scope - libcontainer container b46d40ab82128174733cf47ec0ceb53e9b73da8dc3d18f6b01e94c54b8bc85d5. Jan 30 13:13:34.767606 systemd[1]: Started cri-containerd-ee337524b88d93a186fbc518c4544c28706af142963d82ab8a739e55f7f59089.scope - libcontainer container ee337524b88d93a186fbc518c4544c28706af142963d82ab8a739e55f7f59089. Jan 30 13:13:34.790599 systemd[1]: Started cri-containerd-682a3a2a16bfa18e36d53fd0b234934e8052e84f7fa1f89de138f1bd6d0819e4.scope - libcontainer container 682a3a2a16bfa18e36d53fd0b234934e8052e84f7fa1f89de138f1bd6d0819e4. Jan 30 13:13:34.834458 containerd[1441]: time="2025-01-30T13:13:34.832288131Z" level=info msg="StartContainer for \"b46d40ab82128174733cf47ec0ceb53e9b73da8dc3d18f6b01e94c54b8bc85d5\" returns successfully" Jan 30 13:13:34.834458 containerd[1441]: time="2025-01-30T13:13:34.832301989Z" level=info msg="StartContainer for \"ee337524b88d93a186fbc518c4544c28706af142963d82ab8a739e55f7f59089\" returns successfully" Jan 30 13:13:34.873208 containerd[1441]: time="2025-01-30T13:13:34.873096922Z" level=info msg="StartContainer for \"682a3a2a16bfa18e36d53fd0b234934e8052e84f7fa1f89de138f1bd6d0819e4\" returns successfully" Jan 30 13:13:34.960024 kubelet[2169]: I0130 13:13:34.959726 2169 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:13:34.960334 kubelet[2169]: E0130 13:13:34.960179 2169 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Jan 30 13:13:35.340192 kubelet[2169]: E0130 13:13:35.340156 2169 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:13:35.340298 kubelet[2169]: E0130 13:13:35.340287 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:35.341959 kubelet[2169]: E0130 13:13:35.341932 2169 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:13:35.342098 kubelet[2169]: E0130 13:13:35.342033 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:35.343870 kubelet[2169]: E0130 13:13:35.343845 2169 kubelet.go:3196] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:13:35.343972 kubelet[2169]: E0130 13:13:35.343949 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:36.355966 kubelet[2169]: E0130 13:13:36.353859 2169 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:13:36.355966 kubelet[2169]: E0130 13:13:36.353929 2169 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:13:36.355966 kubelet[2169]: E0130 13:13:36.353982 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:36.355966 kubelet[2169]: E0130 13:13:36.354022 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:36.384404 kubelet[2169]: E0130 13:13:36.383941 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 13:13:36.562007 kubelet[2169]: I0130 13:13:36.561963 2169 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:13:36.571398 kubelet[2169]: I0130 13:13:36.571365 2169 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 30 13:13:36.614962 kubelet[2169]: I0130 13:13:36.614803 2169 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:13:36.624516 kubelet[2169]: E0130 13:13:36.624465 2169 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 30 13:13:36.624516 kubelet[2169]: I0130 13:13:36.624500 2169 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:13:36.626768 kubelet[2169]: E0130 13:13:36.626609 2169 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:13:36.626768 kubelet[2169]: I0130 13:13:36.626645 2169 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:13:36.629026 kubelet[2169]: E0130 13:13:36.628997 2169 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 30 13:13:37.301505 kubelet[2169]: I0130 13:13:37.301407 2169 apiserver.go:52] "Watching apiserver" Jan 30 13:13:37.314870 kubelet[2169]: I0130 13:13:37.314816 2169 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:13:38.289166 systemd[1]: Reloading requested from client PID 2451 ('systemctl') (unit session-7.scope)... Jan 30 13:13:38.289182 systemd[1]: Reloading... Jan 30 13:13:38.360373 zram_generator::config[2492]: No configuration found. 
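The "Reloading requested from client PID 2451 ('systemctl')" entry just above is systemd beginning a daemon-reload, after which the log shows kubelet.service being stopped and started. As a hedged sketch of driving that same sequence from Go over systemd's D-Bus API (using the github.com/coreos/go-systemd library, not anything this host is running):

package main

import (
	"context"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	// Sketch: daemon-reload followed by a unit restart, the sequence
	// systemctl performs in the log above.
	ctx := context.Background()
	conn, err := dbus.NewWithContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	if err := conn.ReloadContext(ctx); err != nil { // systemctl daemon-reload
		log.Fatal(err)
	}

	done := make(chan string, 1)
	if _, err := conn.RestartUnitContext(ctx, "kubelet.service", "replace", done); err != nil {
		log.Fatal(err)
	}
	log.Println("kubelet.service restart job:", <-done) // "done" on success
}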
Jan 30 13:13:38.410948 kubelet[2169]: I0130 13:13:38.410917 2169 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:13:38.426685 kubelet[2169]: E0130 13:13:38.426651 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:38.475828 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:13:38.539785 systemd[1]: Reloading finished in 250 ms. Jan 30 13:13:38.574051 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:13:38.591318 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:13:38.591590 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:13:38.604598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:13:38.704466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:13:38.708907 (kubelet)[2532]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:13:38.761478 kubelet[2532]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:13:38.761478 kubelet[2532]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:13:38.761478 kubelet[2532]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:13:38.762258 kubelet[2532]: I0130 13:13:38.762209 2532 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:13:38.771139 kubelet[2532]: I0130 13:13:38.771086 2532 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:13:38.771139 kubelet[2532]: I0130 13:13:38.771120 2532 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:13:38.771553 kubelet[2532]: I0130 13:13:38.771528 2532 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:13:38.772950 kubelet[2532]: I0130 13:13:38.772921 2532 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:13:38.776314 kubelet[2532]: I0130 13:13:38.776277 2532 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:13:38.783244 kubelet[2532]: E0130 13:13:38.783146 2532 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:13:38.783244 kubelet[2532]: I0130 13:13:38.783226 2532 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
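The deprecation warnings above say --container-runtime-endpoint and --volume-plugin-dir should move into the kubelet config file (--pod-infra-container-image has no config equivalent; sandbox image information moves to the CRI side). A hedged sketch of the config-file equivalents, built from the v1beta1 KubeletConfiguration types as I understand them; the socket and plugin-dir paths are example values, and field names should be verified against the kubelet version in use:

package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Assumed config-file equivalents of two deprecated flags seen above.
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		VolumePluginDir:          "/var/lib/kubelet/volumeplugins",
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // hand this to the kubelet via --config
}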
Jan 30 13:13:38.785859 kubelet[2532]: I0130 13:13:38.785829 2532 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:13:38.786066 kubelet[2532]: I0130 13:13:38.786027 2532 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:13:38.786231 kubelet[2532]: I0130 13:13:38.786056 2532 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:13:38.786309 kubelet[2532]: I0130 13:13:38.786233 2532 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:13:38.786309 kubelet[2532]: I0130 13:13:38.786243 2532 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:13:38.786309 kubelet[2532]: I0130 13:13:38.786285 2532 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:13:38.786444 kubelet[2532]: I0130 13:13:38.786432 2532 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:13:38.786476 kubelet[2532]: I0130 13:13:38.786448 2532 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:13:38.786496 kubelet[2532]: I0130 13:13:38.786483 2532 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:13:38.786518 kubelet[2532]: I0130 13:13:38.786500 2532 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:13:38.787096 kubelet[2532]: I0130 13:13:38.787066 2532 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:13:38.790405 kubelet[2532]: I0130 13:13:38.787791 2532 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:13:38.790405 kubelet[2532]: I0130 13:13:38.788235 2532 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:13:38.790405 kubelet[2532]: I0130 13:13:38.788262 2532 server.go:1287] "Started kubelet" Jan 30 13:13:38.790405 kubelet[2532]: I0130 13:13:38.790157 2532 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:13:38.793745 kubelet[2532]: I0130 13:13:38.791705 2532 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:13:38.793745 kubelet[2532]: I0130 13:13:38.793022 2532 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:13:38.793745 kubelet[2532]: I0130 13:13:38.793157 2532 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:13:38.793745 kubelet[2532]: I0130 13:13:38.792741 2532 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:13:38.796377 kubelet[2532]: I0130 13:13:38.794283 2532 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:13:38.796377 kubelet[2532]: I0130 13:13:38.794408 2532 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:13:38.796377 kubelet[2532]: I0130 13:13:38.794578 2532 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:13:38.796377 kubelet[2532]: I0130 13:13:38.795390 2532 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:13:38.796377 kubelet[2532]: I0130 13:13:38.795594 2532 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:13:38.796377 kubelet[2532]: I0130 13:13:38.795777 2532 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:13:38.796377 kubelet[2532]: E0130 13:13:38.796308 2532 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:13:38.797322 kubelet[2532]: E0130 13:13:38.797299 2532 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:13:38.802158 kubelet[2532]: I0130 13:13:38.802127 2532 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:13:38.811114 kubelet[2532]: I0130 13:13:38.811068 2532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:13:38.813497 kubelet[2532]: I0130 13:13:38.812013 2532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:13:38.813497 kubelet[2532]: I0130 13:13:38.812072 2532 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:13:38.813497 kubelet[2532]: I0130 13:13:38.812093 2532 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
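The kubelet above starts serving the podresources API on unix:/var/lib/kubelet/pod-resources/kubelet.sock. A minimal client sketch against that socket (root access to the socket assumed; the gRPC wiring is illustrative):

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	// Connect to the socket named in the log and list pod resources.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := podresourcesv1.NewPodResourcesListerClient(conn)
	resp, err := client.List(context.Background(), &podresourcesv1.ListPodResourcesRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range resp.GetPodResources() {
		fmt.Printf("%s/%s: %d containers\n",
			pod.GetNamespace(), pod.GetName(), len(pod.GetContainers()))
	}
}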
Jan 30 13:13:38.813497 kubelet[2532]: I0130 13:13:38.812100 2532 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:13:38.813497 kubelet[2532]: E0130 13:13:38.812154 2532 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:13:38.872602 kubelet[2532]: I0130 13:13:38.872361 2532 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:13:38.872736 kubelet[2532]: I0130 13:13:38.872661 2532 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:13:38.872787 kubelet[2532]: I0130 13:13:38.872731 2532 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:13:38.873147 kubelet[2532]: I0130 13:13:38.873122 2532 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:13:38.873197 kubelet[2532]: I0130 13:13:38.873144 2532 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:13:38.873197 kubelet[2532]: I0130 13:13:38.873166 2532 policy_none.go:49] "None policy: Start" Jan 30 13:13:38.873197 kubelet[2532]: I0130 13:13:38.873175 2532 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:13:38.873197 kubelet[2532]: I0130 13:13:38.873185 2532 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:13:38.873294 kubelet[2532]: I0130 13:13:38.873286 2532 state_mem.go:75] "Updated machine memory state" Jan 30 13:13:38.883115 kubelet[2532]: I0130 13:13:38.883081 2532 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:13:38.883282 kubelet[2532]: I0130 13:13:38.883253 2532 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:13:38.883324 kubelet[2532]: I0130 13:13:38.883271 2532 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:13:38.883583 kubelet[2532]: I0130 13:13:38.883560 2532 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:13:38.884852 kubelet[2532]: E0130 13:13:38.884742 2532 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 13:13:38.913278 kubelet[2532]: I0130 13:13:38.913234 2532 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:13:38.913411 kubelet[2532]: I0130 13:13:38.913324 2532 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:13:38.913636 kubelet[2532]: I0130 13:13:38.913234 2532 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:13:38.939304 kubelet[2532]: E0130 13:13:38.939243 2532 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:13:38.988469 kubelet[2532]: I0130 13:13:38.988431 2532 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:13:38.994060 kubelet[2532]: I0130 13:13:38.993765 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d66afb3e8401bdb68e36acb99e14b3f4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d66afb3e8401bdb68e36acb99e14b3f4\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:13:38.996228 kubelet[2532]: I0130 13:13:38.996085 2532 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Jan 30 13:13:38.996228 kubelet[2532]: I0130 13:13:38.996160 2532 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 30 13:13:39.094166 kubelet[2532]: I0130 13:13:39.093885 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:13:39.094166 kubelet[2532]: I0130 13:13:39.093936 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:13:39.094166 kubelet[2532]: I0130 13:13:39.093965 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:13:39.094166 kubelet[2532]: I0130 13:13:39.094006 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d66afb3e8401bdb68e36acb99e14b3f4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d66afb3e8401bdb68e36acb99e14b3f4\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:13:39.094166 kubelet[2532]: I0130 13:13:39.094026 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d66afb3e8401bdb68e36acb99e14b3f4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d66afb3e8401bdb68e36acb99e14b3f4\") " 
pod="kube-system/kube-apiserver-localhost" Jan 30 13:13:39.094449 kubelet[2532]: I0130 13:13:39.094043 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:13:39.094449 kubelet[2532]: I0130 13:13:39.094062 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:13:39.094449 kubelet[2532]: I0130 13:13:39.094078 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:13:39.223010 kubelet[2532]: E0130 13:13:39.222980 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:39.239561 kubelet[2532]: E0130 13:13:39.239484 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:39.239706 kubelet[2532]: E0130 13:13:39.239580 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:39.300302 sudo[2567]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 13:13:39.301070 sudo[2567]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 13:13:39.729628 sudo[2567]: pam_unix(sudo:session): session closed for user root Jan 30 13:13:39.787270 kubelet[2532]: I0130 13:13:39.787202 2532 apiserver.go:52] "Watching apiserver" Jan 30 13:13:39.793647 kubelet[2532]: I0130 13:13:39.793608 2532 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:13:39.842379 kubelet[2532]: E0130 13:13:39.839932 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:39.842379 kubelet[2532]: I0130 13:13:39.840220 2532 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:13:39.842817 kubelet[2532]: I0130 13:13:39.842789 2532 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:13:39.849388 kubelet[2532]: E0130 13:13:39.849355 2532 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 30 13:13:39.850535 kubelet[2532]: E0130 13:13:39.850510 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:39.850926 kubelet[2532]: E0130 13:13:39.849708 2532 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:13:39.850926 kubelet[2532]: E0130 13:13:39.850749 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:39.868100 kubelet[2532]: I0130 13:13:39.867837 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.867798927 podStartE2EDuration="1.867798927s" podCreationTimestamp="2025-01-30 13:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:13:39.867593667 +0000 UTC m=+1.155166307" watchObservedRunningTime="2025-01-30 13:13:39.867798927 +0000 UTC m=+1.155371567" Jan 30 13:13:39.903918 kubelet[2532]: I0130 13:13:39.903698 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9036791050000001 podStartE2EDuration="1.903679105s" podCreationTimestamp="2025-01-30 13:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:13:39.878127356 +0000 UTC m=+1.165699996" watchObservedRunningTime="2025-01-30 13:13:39.903679105 +0000 UTC m=+1.191251745" Jan 30 13:13:39.903918 kubelet[2532]: I0130 13:13:39.903841 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.903836052 podStartE2EDuration="1.903836052s" podCreationTimestamp="2025-01-30 13:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:13:39.903630232 +0000 UTC m=+1.191202872" watchObservedRunningTime="2025-01-30 13:13:39.903836052 +0000 UTC m=+1.191408692" Jan 30 13:13:40.840803 kubelet[2532]: E0130 13:13:40.840749 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:40.841141 kubelet[2532]: E0130 13:13:40.841015 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:41.581476 sudo[1624]: pam_unix(sudo:session): session closed for user root Jan 30 13:13:41.582626 sshd[1623]: Connection closed by 10.0.0.1 port 44344 Jan 30 13:13:41.583169 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Jan 30 13:13:41.586099 systemd[1]: sshd@6-10.0.0.147:22-10.0.0.1:44344.service: Deactivated successfully. Jan 30 13:13:41.587692 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:13:41.587964 systemd[1]: session-7.scope: Consumed 7.805s CPU time, 156.1M memory peak, 0B memory swap peak. Jan 30 13:13:41.589189 systemd-logind[1423]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:13:41.590199 systemd-logind[1423]: Removed session 7. 
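The pod_startup_latency_tracker lines above report podStartSLOduration as roughly the gap between podCreationTimestamp and observedRunningTime, with image-pull time excluded; for these static pods the pull interval is the zero timestamp because the pause image was already present. A back-of-envelope check of the kube-scheduler figure, using timestamps copied from the log (the tracker samples its own clock, so the result differs from the reported 1.8678s by a few hundred microseconds):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Reproduce the ~1.87s podStartSLOduration reported for kube-scheduler.
	created, _ := time.Parse(time.RFC3339, "2025-01-30T13:13:38Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:13:39.867593667Z")
	pulling := time.Duration(0) // static pod: pause image already present
	fmt.Println("podStartSLOduration ~", running.Sub(created)-pulling)
}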
Jan 30 13:13:41.841755 kubelet[2532]: E0130 13:13:41.841650 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:43.477416 kubelet[2532]: E0130 13:13:43.477378 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:43.510533 kubelet[2532]: I0130 13:13:43.510496 2532 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:13:43.510872 containerd[1441]: time="2025-01-30T13:13:43.510832459Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:13:43.511201 kubelet[2532]: I0130 13:13:43.511018 2532 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:13:44.526184 kubelet[2532]: I0130 13:13:44.525693 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-hostproc\") pod \"cilium-2jv8q\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") " pod="kube-system/cilium-2jv8q" Jan 30 13:13:44.526184 kubelet[2532]: I0130 13:13:44.525735 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-cilium-cgroup\") pod \"cilium-2jv8q\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") " pod="kube-system/cilium-2jv8q" Jan 30 13:13:44.526184 kubelet[2532]: I0130 13:13:44.525771 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58247a37-c70f-452f-9833-a523de0361e9-cilium-config-path\") pod \"cilium-2jv8q\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") " pod="kube-system/cilium-2jv8q" Jan 30 13:13:44.526184 kubelet[2532]: I0130 13:13:44.525793 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1a48478-2cd9-446e-8398-2cf84da7bdfa-xtables-lock\") pod \"kube-proxy-lcdc2\" (UID: \"c1a48478-2cd9-446e-8398-2cf84da7bdfa\") " pod="kube-system/kube-proxy-lcdc2" Jan 30 13:13:44.526184 kubelet[2532]: I0130 13:13:44.525813 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-lib-modules\") pod \"cilium-2jv8q\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") " pod="kube-system/cilium-2jv8q" Jan 30 13:13:44.526184 kubelet[2532]: I0130 13:13:44.525836 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-xtables-lock\") pod \"cilium-2jv8q\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") " pod="kube-system/cilium-2jv8q" Jan 30 13:13:44.526616 kubelet[2532]: I0130 13:13:44.525857 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-bpf-maps\") pod \"cilium-2jv8q\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") " 
pod="kube-system/cilium-2jv8q" Jan 30 13:13:44.526616 kubelet[2532]: I0130 13:13:44.525876 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gct9c\" (UniqueName: \"kubernetes.io/projected/c1a48478-2cd9-446e-8398-2cf84da7bdfa-kube-api-access-gct9c\") pod \"kube-proxy-lcdc2\" (UID: \"c1a48478-2cd9-446e-8398-2cf84da7bdfa\") " pod="kube-system/kube-proxy-lcdc2" Jan 30 13:13:44.526616 kubelet[2532]: I0130 13:13:44.525895 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-cilium-run\") pod \"cilium-2jv8q\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") " pod="kube-system/cilium-2jv8q" Jan 30 13:13:44.526616 kubelet[2532]: I0130 13:13:44.525913 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-cni-path\") pod \"cilium-2jv8q\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") " pod="kube-system/cilium-2jv8q" Jan 30 13:13:44.526616 kubelet[2532]: I0130 13:13:44.525927 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-etc-cni-netd\") pod \"cilium-2jv8q\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") " pod="kube-system/cilium-2jv8q" Jan 30 13:13:44.526616 kubelet[2532]: I0130 13:13:44.525946 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58247a37-c70f-452f-9833-a523de0361e9-clustermesh-secrets\") pod \"cilium-2jv8q\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") " pod="kube-system/cilium-2jv8q" Jan 30 13:13:44.526731 kubelet[2532]: I0130 13:13:44.525966 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c1a48478-2cd9-446e-8398-2cf84da7bdfa-kube-proxy\") pod \"kube-proxy-lcdc2\" (UID: \"c1a48478-2cd9-446e-8398-2cf84da7bdfa\") " pod="kube-system/kube-proxy-lcdc2" Jan 30 13:13:44.526731 kubelet[2532]: I0130 13:13:44.525988 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1a48478-2cd9-446e-8398-2cf84da7bdfa-lib-modules\") pod \"kube-proxy-lcdc2\" (UID: \"c1a48478-2cd9-446e-8398-2cf84da7bdfa\") " pod="kube-system/kube-proxy-lcdc2" Jan 30 13:13:44.533094 systemd[1]: Created slice kubepods-besteffort-podc1a48478_2cd9_446e_8398_2cf84da7bdfa.slice - libcontainer container kubepods-besteffort-podc1a48478_2cd9_446e_8398_2cf84da7bdfa.slice. Jan 30 13:13:44.556306 systemd[1]: Created slice kubepods-burstable-pod58247a37_c70f_452f_9833_a523de0361e9.slice - libcontainer container kubepods-burstable-pod58247a37_c70f_452f_9833_a523de0361e9.slice. Jan 30 13:13:44.581881 systemd[1]: Created slice kubepods-besteffort-podcf43b0be_9bc2_457e_84d8_38272311301a.slice - libcontainer container kubepods-besteffort-podcf43b0be_9bc2_457e_84d8_38272311301a.slice. 
Jan 30 13:13:44.627392 kubelet[2532]: I0130 13:13:44.626808 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-host-proc-sys-kernel\") pod \"cilium-2jv8q\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") " pod="kube-system/cilium-2jv8q" Jan 30 13:13:44.627392 kubelet[2532]: I0130 13:13:44.626861 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58247a37-c70f-452f-9833-a523de0361e9-hubble-tls\") pod \"cilium-2jv8q\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") " pod="kube-system/cilium-2jv8q" Jan 30 13:13:44.627392 kubelet[2532]: I0130 13:13:44.626914 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-host-proc-sys-net\") pod \"cilium-2jv8q\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") " pod="kube-system/cilium-2jv8q" Jan 30 13:13:44.627392 kubelet[2532]: I0130 13:13:44.626966 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm24h\" (UniqueName: \"kubernetes.io/projected/58247a37-c70f-452f-9833-a523de0361e9-kube-api-access-dm24h\") pod \"cilium-2jv8q\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") " pod="kube-system/cilium-2jv8q" Jan 30 13:13:44.727305 kubelet[2532]: I0130 13:13:44.727251 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82x4f\" (UniqueName: \"kubernetes.io/projected/cf43b0be-9bc2-457e-84d8-38272311301a-kube-api-access-82x4f\") pod \"cilium-operator-6c4d7847fc-w2wn7\" (UID: \"cf43b0be-9bc2-457e-84d8-38272311301a\") " pod="kube-system/cilium-operator-6c4d7847fc-w2wn7" Jan 30 13:13:44.727523 kubelet[2532]: I0130 13:13:44.727509 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf43b0be-9bc2-457e-84d8-38272311301a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-w2wn7\" (UID: \"cf43b0be-9bc2-457e-84d8-38272311301a\") " pod="kube-system/cilium-operator-6c4d7847fc-w2wn7" Jan 30 13:13:44.846305 kubelet[2532]: E0130 13:13:44.846188 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:44.847138 containerd[1441]: time="2025-01-30T13:13:44.847099114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lcdc2,Uid:c1a48478-2cd9-446e-8398-2cf84da7bdfa,Namespace:kube-system,Attempt:0,}" Jan 30 13:13:44.861815 kubelet[2532]: E0130 13:13:44.861771 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:44.862399 containerd[1441]: time="2025-01-30T13:13:44.862338065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2jv8q,Uid:58247a37-c70f-452f-9833-a523de0361e9,Namespace:kube-system,Attempt:0,}" Jan 30 13:13:44.884910 kubelet[2532]: E0130 13:13:44.884818 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 
13:13:44.885960 containerd[1441]: time="2025-01-30T13:13:44.885884284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w2wn7,Uid:cf43b0be-9bc2-457e-84d8-38272311301a,Namespace:kube-system,Attempt:0,}" Jan 30 13:13:44.910524 containerd[1441]: time="2025-01-30T13:13:44.910411484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:13:44.910524 containerd[1441]: time="2025-01-30T13:13:44.910484256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:13:44.910524 containerd[1441]: time="2025-01-30T13:13:44.910498066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:13:44.910833 containerd[1441]: time="2025-01-30T13:13:44.910628400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:13:44.918774 containerd[1441]: time="2025-01-30T13:13:44.918680004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:13:44.918774 containerd[1441]: time="2025-01-30T13:13:44.918745331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:13:44.918774 containerd[1441]: time="2025-01-30T13:13:44.918760822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:13:44.918913 containerd[1441]: time="2025-01-30T13:13:44.918844963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:13:44.923275 containerd[1441]: time="2025-01-30T13:13:44.923193316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:13:44.923353 containerd[1441]: time="2025-01-30T13:13:44.923298751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:13:44.923377 containerd[1441]: time="2025-01-30T13:13:44.923332936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:13:44.923504 containerd[1441]: time="2025-01-30T13:13:44.923472916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:13:44.934494 systemd[1]: Started cri-containerd-8103afaad06d4899ba7dbed27f039f3afa55c81130fc9acf1050cd58b16e2923.scope - libcontainer container 8103afaad06d4899ba7dbed27f039f3afa55c81130fc9acf1050cd58b16e2923. Jan 30 13:13:44.935794 systemd[1]: Started cri-containerd-c5683366a964de43541ac99f3bacd8239d34b0348e85784cf1c988b7f662987d.scope - libcontainer container c5683366a964de43541ac99f3bacd8239d34b0348e85784cf1c988b7f662987d. Jan 30 13:13:44.938683 systemd[1]: Started cri-containerd-db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a.scope - libcontainer container db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a. 
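Each RunPodSandbox above spawns a runc v2 shim, which is what produces the repeated "loading plugin io.containerd.*" lines before systemd starts the matching cri-containerd-<id>.scope. To inspect the resulting containers with containerd's own Go client, a sketch like the following works; note the CRI plugin keeps its state in the "k8s.io" containerd namespace:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// List the containers backing the cri-containerd-* scopes above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID()) // the ids embedded in the scope names
	}
}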
Jan 30 13:13:44.967887 containerd[1441]: time="2025-01-30T13:13:44.967773714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2jv8q,Uid:58247a37-c70f-452f-9833-a523de0361e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a\"" Jan 30 13:13:44.968494 kubelet[2532]: E0130 13:13:44.968468 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:44.970275 containerd[1441]: time="2025-01-30T13:13:44.970071439Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:13:44.971038 containerd[1441]: time="2025-01-30T13:13:44.970977248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lcdc2,Uid:c1a48478-2cd9-446e-8398-2cf84da7bdfa,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5683366a964de43541ac99f3bacd8239d34b0348e85784cf1c988b7f662987d\"" Jan 30 13:13:44.972607 kubelet[2532]: E0130 13:13:44.972590 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:44.976262 containerd[1441]: time="2025-01-30T13:13:44.976206712Z" level=info msg="CreateContainer within sandbox \"c5683366a964de43541ac99f3bacd8239d34b0348e85784cf1c988b7f662987d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:13:44.987597 containerd[1441]: time="2025-01-30T13:13:44.987562282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w2wn7,Uid:cf43b0be-9bc2-457e-84d8-38272311301a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8103afaad06d4899ba7dbed27f039f3afa55c81130fc9acf1050cd58b16e2923\"" Jan 30 13:13:44.988385 kubelet[2532]: E0130 13:13:44.988217 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:44.995170 containerd[1441]: time="2025-01-30T13:13:44.995123416Z" level=info msg="CreateContainer within sandbox \"c5683366a964de43541ac99f3bacd8239d34b0348e85784cf1c988b7f662987d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6b6e1eee78adf99bbcafe046a21b023ebfdb9dcbe39d3cb9472e461dd8bad6ac\"" Jan 30 13:13:44.995611 containerd[1441]: time="2025-01-30T13:13:44.995587268Z" level=info msg="StartContainer for \"6b6e1eee78adf99bbcafe046a21b023ebfdb9dcbe39d3cb9472e461dd8bad6ac\"" Jan 30 13:13:45.031521 systemd[1]: Started cri-containerd-6b6e1eee78adf99bbcafe046a21b023ebfdb9dcbe39d3cb9472e461dd8bad6ac.scope - libcontainer container 6b6e1eee78adf99bbcafe046a21b023ebfdb9dcbe39d3cb9472e461dd8bad6ac. 
Jan 30 13:13:45.079722 containerd[1441]: time="2025-01-30T13:13:45.079604464Z" level=info msg="StartContainer for \"6b6e1eee78adf99bbcafe046a21b023ebfdb9dcbe39d3cb9472e461dd8bad6ac\" returns successfully" Jan 30 13:13:45.581782 kubelet[2532]: E0130 13:13:45.581755 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:45.853459 kubelet[2532]: E0130 13:13:45.852997 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:45.853459 kubelet[2532]: E0130 13:13:45.853049 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:45.879065 kubelet[2532]: I0130 13:13:45.879013 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lcdc2" podStartSLOduration=1.87899721 podStartE2EDuration="1.87899721s" podCreationTimestamp="2025-01-30 13:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:13:45.877767057 +0000 UTC m=+7.165339697" watchObservedRunningTime="2025-01-30 13:13:45.87899721 +0000 UTC m=+7.166569850" Jan 30 13:13:51.696424 kubelet[2532]: E0130 13:13:51.696203 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:52.886973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2844866261.mount: Deactivated successfully. Jan 30 13:13:53.102928 update_engine[1426]: I20250130 13:13:53.102857 1426 update_attempter.cc:509] Updating boot flags... 
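The `pod_startup_latency_tracker` record above says kube-proxy-lcdc2 went from creation to running in about 1.88 s, with both pull timestamps zeroed because no image pull was needed. These records are easy to mine; a sketch that extracts the figures from a saved copy of this journal (the file name is an assumption):

```python
import re

# Sketch: extract pod startup durations from kubelet
# pod_startup_latency_tracker records like the one above.
PATTERN = re.compile(
    r'pod="(?P<pod>[^"]+)" podStartSLOduration=(?P<slo>[\d.]+) '
    r'podStartE2EDuration="(?P<e2e>[^"]+)"')

def startup_durations(path="journal.log"):   # file name is an assumption
    with open(path) as fh:
        text = fh.read()
    for m in PATTERN.finditer(text):
        yield m.group("pod"), float(m.group("slo")), m.group("e2e")

if __name__ == "__main__":
    for pod, slo, e2e in startup_durations():
        print(f"{pod}: SLO {slo:.2f}s, end-to-end {e2e}")
```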
Jan 30 13:13:53.137385 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2919) Jan 30 13:13:53.205456 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2923) Jan 30 13:13:53.507606 kubelet[2532]: E0130 13:13:53.507565 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:54.314268 containerd[1441]: time="2025-01-30T13:13:54.314219376Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:54.314771 containerd[1441]: time="2025-01-30T13:13:54.314661722Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 13:13:54.315668 containerd[1441]: time="2025-01-30T13:13:54.315639413Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:54.317463 containerd[1441]: time="2025-01-30T13:13:54.317423203Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.347313579s" Jan 30 13:13:54.317522 containerd[1441]: time="2025-01-30T13:13:54.317464060Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 13:13:54.319959 containerd[1441]: time="2025-01-30T13:13:54.319902285Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:13:54.325020 containerd[1441]: time="2025-01-30T13:13:54.324981421Z" level=info msg="CreateContainer within sandbox \"db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:13:54.353608 containerd[1441]: time="2025-01-30T13:13:54.353559717Z" level=info msg="CreateContainer within sandbox \"db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261\"" Jan 30 13:13:54.354422 containerd[1441]: time="2025-01-30T13:13:54.354116391Z" level=info msg="StartContainer for \"4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261\"" Jan 30 13:13:54.387562 systemd[1]: Started cri-containerd-4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261.scope - libcontainer container 4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261. 
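The pull that began back at 13:13:44 completes here: containerd reports 157,646,710 bytes read in 9.347313579 s for the cilium image. A quick back-of-envelope check of the transfer rate from those two figures:

```python
# Sketch: rough pull throughput for the cilium image, using the two
# figures containerd logs above (bytes read and wall-clock pull time).
bytes_read = 157_646_710      # "active requests=0, bytes read=157646710"
pull_seconds = 9.347313579    # "... in 9.347313579s"

mib_per_s = bytes_read / pull_seconds / (1024 * 1024)
print(f"~{mib_per_s:.1f} MiB/s sustained")   # ~16.1 MiB/s
```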
Jan 30 13:13:54.415900 containerd[1441]: time="2025-01-30T13:13:54.415835182Z" level=info msg="StartContainer for \"4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261\" returns successfully" Jan 30 13:13:54.454440 systemd[1]: cri-containerd-4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261.scope: Deactivated successfully. Jan 30 13:13:54.670361 containerd[1441]: time="2025-01-30T13:13:54.665254816Z" level=info msg="shim disconnected" id=4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261 namespace=k8s.io Jan 30 13:13:54.670642 containerd[1441]: time="2025-01-30T13:13:54.670379531Z" level=warning msg="cleaning up after shim disconnected" id=4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261 namespace=k8s.io Jan 30 13:13:54.670642 containerd[1441]: time="2025-01-30T13:13:54.670400580Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:13:54.881275 kubelet[2532]: E0130 13:13:54.881237 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:54.887040 containerd[1441]: time="2025-01-30T13:13:54.886878603Z" level=info msg="CreateContainer within sandbox \"db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:13:54.903876 containerd[1441]: time="2025-01-30T13:13:54.903813684Z" level=info msg="CreateContainer within sandbox \"db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a\"" Jan 30 13:13:54.906725 containerd[1441]: time="2025-01-30T13:13:54.906675567Z" level=info msg="StartContainer for \"02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a\"" Jan 30 13:13:54.960577 systemd[1]: Started cri-containerd-02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a.scope - libcontainer container 02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a. Jan 30 13:13:54.985110 containerd[1441]: time="2025-01-30T13:13:54.985053803Z" level=info msg="StartContainer for \"02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a\" returns successfully" Jan 30 13:13:55.015399 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:13:55.016142 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:13:55.016226 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:13:55.024730 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:13:55.024916 systemd[1]: cri-containerd-02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a.scope: Deactivated successfully. Jan 30 13:13:55.050013 containerd[1441]: time="2025-01-30T13:13:55.049948128Z" level=info msg="shim disconnected" id=02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a namespace=k8s.io Jan 30 13:13:55.050013 containerd[1441]: time="2025-01-30T13:13:55.050002750Z" level=warning msg="cleaning up after shim disconnected" id=02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a namespace=k8s.io Jan 30 13:13:55.050013 containerd[1441]: time="2025-01-30T13:13:55.050010873Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:13:55.052015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
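What follows between 13:13:54 and 13:13:58 is Cilium's init-container chain inside sandbox db7ebeac…: mount-cgroup runs and exits (its scope deactivates and containerd logs "shim disconnected" while cleaning up), then apply-sysctl-overwrites, and below that mount-bpf-fs, clean-cilium-state, and finally the long-running cilium-agent. A sketch that recovers this per-sandbox creation order from the CreateContainer requests (file name assumed):

```python
import re
from collections import defaultdict

# Sketch: list container names per sandbox in creation order, from the
# "CreateContainer within sandbox ..." requests above. Handles the
# backslash-escaped quotes that appear inside containerd msg= fields.
PATTERN = re.compile(
    r'CreateContainer within sandbox \\?"(?P<sandbox>[0-9a-f]{64})\\?" '
    r'for container &ContainerMetadata\{Name:(?P<name>[^,]+),')

def container_order(path="journal.log"):   # file name is an assumption
    order = defaultdict(list)
    with open(path) as fh:
        for m in PATTERN.finditer(fh.read()):
            if m.group("name") not in order[m.group("sandbox")]:
                order[m.group("sandbox")].append(m.group("name"))
    return order

if __name__ == "__main__":
    for sandbox, names in container_order().items():
        print(sandbox[:12], "->", " -> ".join(names))
```

Run against this excerpt it should print db7ebeac6b82 -> mount-cgroup -> apply-sysctl-overwrites -> mount-bpf-fs -> clean-cilium-state -> cilium-agent.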
Jan 30 13:13:55.345995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261-rootfs.mount: Deactivated successfully. Jan 30 13:13:55.556256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2381133805.mount: Deactivated successfully. Jan 30 13:13:55.884443 kubelet[2532]: E0130 13:13:55.884233 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:55.887388 containerd[1441]: time="2025-01-30T13:13:55.887322799Z" level=info msg="CreateContainer within sandbox \"db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:13:55.907029 containerd[1441]: time="2025-01-30T13:13:55.906977903Z" level=info msg="CreateContainer within sandbox \"db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636\"" Jan 30 13:13:55.908382 containerd[1441]: time="2025-01-30T13:13:55.908097551Z" level=info msg="StartContainer for \"b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636\"" Jan 30 13:13:55.947553 systemd[1]: Started cri-containerd-b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636.scope - libcontainer container b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636. Jan 30 13:13:55.978154 containerd[1441]: time="2025-01-30T13:13:55.978097878Z" level=info msg="StartContainer for \"b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636\" returns successfully" Jan 30 13:13:55.990233 systemd[1]: cri-containerd-b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636.scope: Deactivated successfully. Jan 30 13:13:56.016016 containerd[1441]: time="2025-01-30T13:13:56.015785040Z" level=info msg="shim disconnected" id=b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636 namespace=k8s.io Jan 30 13:13:56.016016 containerd[1441]: time="2025-01-30T13:13:56.015851386Z" level=warning msg="cleaning up after shim disconnected" id=b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636 namespace=k8s.io Jan 30 13:13:56.016016 containerd[1441]: time="2025-01-30T13:13:56.015860389Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:13:56.345671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636-rootfs.mount: Deactivated successfully. 
Jan 30 13:13:56.733582 containerd[1441]: time="2025-01-30T13:13:56.733444704Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:56.734705 containerd[1441]: time="2025-01-30T13:13:56.734650564Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 30 13:13:56.735797 containerd[1441]: time="2025-01-30T13:13:56.735740859Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:13:56.737299 containerd[1441]: time="2025-01-30T13:13:56.737262599Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.41730317s" Jan 30 13:13:56.737562 containerd[1441]: time="2025-01-30T13:13:56.737443548Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 13:13:56.749289 containerd[1441]: time="2025-01-30T13:13:56.749226677Z" level=info msg="CreateContainer within sandbox \"8103afaad06d4899ba7dbed27f039f3afa55c81130fc9acf1050cd58b16e2923\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:13:56.759515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4016511863.mount: Deactivated successfully. Jan 30 13:13:56.761359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2133743726.mount: Deactivated successfully. Jan 30 13:13:56.764158 containerd[1441]: time="2025-01-30T13:13:56.764037040Z" level=info msg="CreateContainer within sandbox \"8103afaad06d4899ba7dbed27f039f3afa55c81130fc9acf1050cd58b16e2923\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f89ed8259c6559bd81ecc0c466342a70029a2e8f6aee691fff4f056baefdc1c8\"" Jan 30 13:13:56.764501 containerd[1441]: time="2025-01-30T13:13:56.764478248Z" level=info msg="StartContainer for \"f89ed8259c6559bd81ecc0c466342a70029a2e8f6aee691fff4f056baefdc1c8\"" Jan 30 13:13:56.793541 systemd[1]: Started cri-containerd-f89ed8259c6559bd81ecc0c466342a70029a2e8f6aee691fff4f056baefdc1c8.scope - libcontainer container f89ed8259c6559bd81ecc0c466342a70029a2e8f6aee691fff4f056baefdc1c8. 
Jan 30 13:13:56.820614 containerd[1441]: time="2025-01-30T13:13:56.820561896Z" level=info msg="StartContainer for \"f89ed8259c6559bd81ecc0c466342a70029a2e8f6aee691fff4f056baefdc1c8\" returns successfully" Jan 30 13:13:56.891326 kubelet[2532]: E0130 13:13:56.891235 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:56.896528 kubelet[2532]: E0130 13:13:56.896044 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:56.898584 containerd[1441]: time="2025-01-30T13:13:56.898531562Z" level=info msg="CreateContainer within sandbox \"db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:13:56.930415 containerd[1441]: time="2025-01-30T13:13:56.930370172Z" level=info msg="CreateContainer within sandbox \"db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39\"" Jan 30 13:13:56.930993 kubelet[2532]: I0130 13:13:56.930936 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-w2wn7" podStartSLOduration=1.181374137 podStartE2EDuration="12.930917541s" podCreationTimestamp="2025-01-30 13:13:44 +0000 UTC" firstStartedPulling="2025-01-30 13:13:44.989035897 +0000 UTC m=+6.276608537" lastFinishedPulling="2025-01-30 13:13:56.738579301 +0000 UTC m=+18.026151941" observedRunningTime="2025-01-30 13:13:56.90225366 +0000 UTC m=+18.189826300" watchObservedRunningTime="2025-01-30 13:13:56.930917541 +0000 UTC m=+18.218490181" Jan 30 13:13:56.931485 containerd[1441]: time="2025-01-30T13:13:56.931373194Z" level=info msg="StartContainer for \"a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39\"" Jan 30 13:13:56.963577 systemd[1]: Started cri-containerd-a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39.scope - libcontainer container a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39. Jan 30 13:13:57.008010 systemd[1]: cri-containerd-a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39.scope: Deactivated successfully. 
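The latency-tracker record above for cilium-operator-6c4d7847fc-w2wn7 is dominated by the image pull: the monotonic offsets (m=+6.276608537 to m=+18.026151941) bracket the pull inside the 12.93 s end-to-end startup. The arithmetic, spelled out:

```python
# Sketch: how much of cilium-operator's startup was spent pulling,
# using the monotonic "m=+..." offsets kubelet logs above.
first_started = 6.276608537    # m=+ offset in firstStartedPulling
last_finished = 18.026151941   # m=+ offset in lastFinishedPulling
e2e_seconds = 12.930917541     # podStartE2EDuration

pulling = last_finished - first_started
print(f"pulling took {pulling:.2f}s of {e2e_seconds:.2f}s "
      f"({100 * pulling / e2e_seconds:.0f}%)")   # ~11.75s, ~91%
```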
Jan 30 13:13:57.012274 containerd[1441]: time="2025-01-30T13:13:57.012146615Z" level=info msg="StartContainer for \"a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39\" returns successfully" Jan 30 13:13:57.076720 containerd[1441]: time="2025-01-30T13:13:57.076500621Z" level=info msg="shim disconnected" id=a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39 namespace=k8s.io Jan 30 13:13:57.076720 containerd[1441]: time="2025-01-30T13:13:57.076565564Z" level=warning msg="cleaning up after shim disconnected" id=a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39 namespace=k8s.io Jan 30 13:13:57.076720 containerd[1441]: time="2025-01-30T13:13:57.076574968Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:13:57.905484 kubelet[2532]: E0130 13:13:57.905441 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:57.905973 kubelet[2532]: E0130 13:13:57.905617 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:57.908883 containerd[1441]: time="2025-01-30T13:13:57.908828106Z" level=info msg="CreateContainer within sandbox \"db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:13:57.931557 containerd[1441]: time="2025-01-30T13:13:57.931489174Z" level=info msg="CreateContainer within sandbox \"db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff\"" Jan 30 13:13:57.932261 containerd[1441]: time="2025-01-30T13:13:57.932233684Z" level=info msg="StartContainer for \"55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff\"" Jan 30 13:13:57.962567 systemd[1]: Started cri-containerd-55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff.scope - libcontainer container 55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff. Jan 30 13:13:57.997190 containerd[1441]: time="2025-01-30T13:13:57.996906926Z" level=info msg="StartContainer for \"55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff\" returns successfully" Jan 30 13:13:58.126978 kubelet[2532]: I0130 13:13:58.126936 2532 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 13:13:58.178327 systemd[1]: Created slice kubepods-burstable-podd802a47f_ac13_43da_9ae1_ef16e937ee73.slice - libcontainer container kubepods-burstable-podd802a47f_ac13_43da_9ae1_ef16e937ee73.slice. Jan 30 13:13:58.183775 systemd[1]: Created slice kubepods-burstable-podb816bd35_9860_44af_b252_c648b075b5b1.slice - libcontainer container kubepods-burstable-podb816bd35_9860_44af_b252_c648b075b5b1.slice. 
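The "Created slice kubepods-burstable-pod…" lines above are systemd's side of kubelet's cgroup setup for the two coredns pods; the pod UID is embedded in the unit name with its dashes swapped for underscores (compare the slice names to the UIDs in the volume records just below). A sketch that maps them back (file name assumed):

```python
import re

# Sketch: recover pod UIDs from the kubepods slice names systemd logs
# above; dashes in the UID appear as underscores in the unit name
# (e.g. ...podd802a47f_ac13_... <-> UID d802a47f-ac13-...).
PATTERN = re.compile(
    r'Created slice kubepods-burstable-pod(?P<uid>[0-9a-f_]+)\.slice')

def pod_uids(path="journal.log"):   # file name is an assumption
    with open(path) as fh:
        for m in PATTERN.finditer(fh.read()):
            yield m.group("uid").replace("_", "-")

if __name__ == "__main__":
    for uid in pod_uids():
        print(uid)
```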
Jan 30 13:13:58.228304 kubelet[2532]: I0130 13:13:58.228249 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b816bd35-9860-44af-b252-c648b075b5b1-config-volume\") pod \"coredns-668d6bf9bc-c8fv7\" (UID: \"b816bd35-9860-44af-b252-c648b075b5b1\") " pod="kube-system/coredns-668d6bf9bc-c8fv7" Jan 30 13:13:58.228432 kubelet[2532]: I0130 13:13:58.228329 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d802a47f-ac13-43da-9ae1-ef16e937ee73-config-volume\") pod \"coredns-668d6bf9bc-2ljn7\" (UID: \"d802a47f-ac13-43da-9ae1-ef16e937ee73\") " pod="kube-system/coredns-668d6bf9bc-2ljn7" Jan 30 13:13:58.228432 kubelet[2532]: I0130 13:13:58.228370 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59gbm\" (UniqueName: \"kubernetes.io/projected/d802a47f-ac13-43da-9ae1-ef16e937ee73-kube-api-access-59gbm\") pod \"coredns-668d6bf9bc-2ljn7\" (UID: \"d802a47f-ac13-43da-9ae1-ef16e937ee73\") " pod="kube-system/coredns-668d6bf9bc-2ljn7" Jan 30 13:13:58.228432 kubelet[2532]: I0130 13:13:58.228402 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hst5l\" (UniqueName: \"kubernetes.io/projected/b816bd35-9860-44af-b252-c648b075b5b1-kube-api-access-hst5l\") pod \"coredns-668d6bf9bc-c8fv7\" (UID: \"b816bd35-9860-44af-b252-c648b075b5b1\") " pod="kube-system/coredns-668d6bf9bc-c8fv7" Jan 30 13:13:58.484031 kubelet[2532]: E0130 13:13:58.483336 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:58.485322 containerd[1441]: time="2025-01-30T13:13:58.484534287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2ljn7,Uid:d802a47f-ac13-43da-9ae1-ef16e937ee73,Namespace:kube-system,Attempt:0,}" Jan 30 13:13:58.488404 kubelet[2532]: E0130 13:13:58.488079 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:58.489067 containerd[1441]: time="2025-01-30T13:13:58.489032965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c8fv7,Uid:b816bd35-9860-44af-b252-c648b075b5b1,Namespace:kube-system,Attempt:0,}" Jan 30 13:13:58.909361 kubelet[2532]: E0130 13:13:58.909317 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:13:58.946147 kubelet[2532]: I0130 13:13:58.944630 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2jv8q" podStartSLOduration=5.594395185 podStartE2EDuration="14.944610529s" podCreationTimestamp="2025-01-30 13:13:44 +0000 UTC" firstStartedPulling="2025-01-30 13:13:44.969473091 +0000 UTC m=+6.257045691" lastFinishedPulling="2025-01-30 13:13:54.319688435 +0000 UTC m=+15.607261035" observedRunningTime="2025-01-30 13:13:58.943443084 +0000 UTC m=+20.231015724" watchObservedRunningTime="2025-01-30 13:13:58.944610529 +0000 UTC m=+20.232183169" Jan 30 13:13:59.911508 kubelet[2532]: E0130 13:13:59.911425 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:00.302951 systemd-networkd[1374]: cilium_host: Link UP Jan 30 13:14:00.303070 systemd-networkd[1374]: cilium_net: Link UP Jan 30 13:14:00.303191 systemd-networkd[1374]: cilium_net: Gained carrier Jan 30 13:14:00.303304 systemd-networkd[1374]: cilium_host: Gained carrier Jan 30 13:14:00.400406 systemd-networkd[1374]: cilium_vxlan: Link UP Jan 30 13:14:00.400976 systemd-networkd[1374]: cilium_vxlan: Gained carrier Jan 30 13:14:00.483523 systemd-networkd[1374]: cilium_host: Gained IPv6LL Jan 30 13:14:00.732371 kernel: NET: Registered PF_ALG protocol family Jan 30 13:14:00.914231 kubelet[2532]: E0130 13:14:00.914153 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:01.164500 systemd-networkd[1374]: cilium_net: Gained IPv6LL Jan 30 13:14:01.374231 systemd-networkd[1374]: lxc_health: Link UP Jan 30 13:14:01.385108 systemd-networkd[1374]: lxc_health: Gained carrier Jan 30 13:14:01.548631 systemd-networkd[1374]: cilium_vxlan: Gained IPv6LL Jan 30 13:14:01.640373 systemd-networkd[1374]: lxc7241297cb724: Link UP Jan 30 13:14:01.645808 systemd-networkd[1374]: lxcec65157e2f91: Link UP Jan 30 13:14:01.660365 kernel: eth0: renamed from tmp43f9f Jan 30 13:14:01.667974 kernel: eth0: renamed from tmpf410c Jan 30 13:14:01.675316 systemd-networkd[1374]: lxcec65157e2f91: Gained carrier Jan 30 13:14:01.676746 systemd-networkd[1374]: lxc7241297cb724: Gained carrier Jan 30 13:14:02.443483 systemd-networkd[1374]: lxc_health: Gained IPv6LL Jan 30 13:14:02.763496 systemd-networkd[1374]: lxc7241297cb724: Gained IPv6LL Jan 30 13:14:02.879705 kubelet[2532]: E0130 13:14:02.879660 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:02.920953 kubelet[2532]: E0130 13:14:02.920404 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:03.276489 systemd-networkd[1374]: lxcec65157e2f91: Gained IPv6LL Jan 30 13:14:03.931218 kubelet[2532]: E0130 13:14:03.931173 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:04.991720 systemd[1]: Started sshd@7-10.0.0.147:22-10.0.0.1:55226.service - OpenSSH per-connection server daemon (10.0.0.1:55226). Jan 30 13:14:05.053236 sshd[3762]: Accepted publickey for core from 10.0.0.1 port 55226 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:05.054695 sshd-session[3762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:05.059144 systemd-logind[1423]: New session 8 of user core. Jan 30 13:14:05.066520 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:14:05.206461 sshd[3764]: Connection closed by 10.0.0.1 port 55226 Jan 30 13:14:05.206368 sshd-session[3762]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:05.210822 systemd[1]: sshd@7-10.0.0.147:22-10.0.0.1:55226.service: Deactivated successfully. Jan 30 13:14:05.214282 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:14:05.215120 systemd-logind[1423]: Session 8 logged out. 
Waiting for processes to exit. Jan 30 13:14:05.216073 systemd-logind[1423]: Removed session 8. Jan 30 13:14:05.413376 containerd[1441]: time="2025-01-30T13:14:05.413272524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:14:05.413376 containerd[1441]: time="2025-01-30T13:14:05.413330259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:14:05.414134 containerd[1441]: time="2025-01-30T13:14:05.413869957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:05.414134 containerd[1441]: time="2025-01-30T13:14:05.413999149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:05.420079 containerd[1441]: time="2025-01-30T13:14:05.415619962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:14:05.420079 containerd[1441]: time="2025-01-30T13:14:05.415687019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:14:05.420079 containerd[1441]: time="2025-01-30T13:14:05.415702503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:05.420079 containerd[1441]: time="2025-01-30T13:14:05.415805690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:05.444549 systemd[1]: Started cri-containerd-43f9f8645eae18de2088feecb964c3dcabb6d4f69e434780330d6bb8e210876f.scope - libcontainer container 43f9f8645eae18de2088feecb964c3dcabb6d4f69e434780330d6bb8e210876f. Jan 30 13:14:05.447737 systemd[1]: Started cri-containerd-f410c000587520580e822e3b664f9808111386f5f076f8fc3385f1c42542ab87.scope - libcontainer container f410c000587520580e822e3b664f9808111386f5f076f8fc3385f1c42542ab87. 
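Stepping back to the systemd-networkd records at 13:14:00–13:14:03: the cilium_host/cilium_net pair, the cilium_vxlan overlay device, and the per-pod lxc* interfaces each go Link UP, gain carrier, then gain an IPv6 link-local address. A sketch that collapses those records into a per-interface timeline (file name assumed):

```python
import re
from collections import defaultdict

# Sketch: per-interface timeline of the systemd-networkd link events
# shown above (cilium_host, cilium_net, cilium_vxlan, lxc_health, ...).
PATTERN = re.compile(
    r'systemd-networkd\[\d+\]: (?P<iface>[\w.-]+): '
    r'(?P<event>Link UP|Gained carrier|Lost carrier|Gained IPv6LL)')

def link_timeline(path="journal.log"):   # file name is an assumption
    timeline = defaultdict(list)
    with open(path) as fh:
        for m in PATTERN.finditer(fh.read()):
            timeline[m.group("iface")].append(m.group("event"))
    return timeline

if __name__ == "__main__":
    for iface, events in link_timeline().items():
        print(f"{iface}: {' -> '.join(events)}")
```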
Jan 30 13:14:05.459627 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:14:05.460812 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:14:05.481968 containerd[1441]: time="2025-01-30T13:14:05.481676309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c8fv7,Uid:b816bd35-9860-44af-b252-c648b075b5b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f410c000587520580e822e3b664f9808111386f5f076f8fc3385f1c42542ab87\"" Jan 30 13:14:05.482534 kubelet[2532]: E0130 13:14:05.482512 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:05.484384 containerd[1441]: time="2025-01-30T13:14:05.484272971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2ljn7,Uid:d802a47f-ac13-43da-9ae1-ef16e937ee73,Namespace:kube-system,Attempt:0,} returns sandbox id \"43f9f8645eae18de2088feecb964c3dcabb6d4f69e434780330d6bb8e210876f\"" Jan 30 13:14:05.485208 kubelet[2532]: E0130 13:14:05.485179 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:05.486808 containerd[1441]: time="2025-01-30T13:14:05.486240872Z" level=info msg="CreateContainer within sandbox \"f410c000587520580e822e3b664f9808111386f5f076f8fc3385f1c42542ab87\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:14:05.489519 containerd[1441]: time="2025-01-30T13:14:05.489375991Z" level=info msg="CreateContainer within sandbox \"43f9f8645eae18de2088feecb964c3dcabb6d4f69e434780330d6bb8e210876f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:14:05.503058 containerd[1441]: time="2025-01-30T13:14:05.503005102Z" level=info msg="CreateContainer within sandbox \"43f9f8645eae18de2088feecb964c3dcabb6d4f69e434780330d6bb8e210876f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4c075fc67a2d55c3a6d6e3744e6bda382745dac2a4450ecadd1363ea85c72da7\"" Jan 30 13:14:05.506127 containerd[1441]: time="2025-01-30T13:14:05.506077445Z" level=info msg="StartContainer for \"4c075fc67a2d55c3a6d6e3744e6bda382745dac2a4450ecadd1363ea85c72da7\"" Jan 30 13:14:05.512040 containerd[1441]: time="2025-01-30T13:14:05.511988871Z" level=info msg="CreateContainer within sandbox \"f410c000587520580e822e3b664f9808111386f5f076f8fc3385f1c42542ab87\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"95b2c073f2424002cc571ba6cd93d1126a813d026fa27aa9c14fa8cbe918510e\"" Jan 30 13:14:05.512741 containerd[1441]: time="2025-01-30T13:14:05.512498521Z" level=info msg="StartContainer for \"95b2c073f2424002cc571ba6cd93d1126a813d026fa27aa9c14fa8cbe918510e\"" Jan 30 13:14:05.537554 systemd[1]: Started cri-containerd-4c075fc67a2d55c3a6d6e3744e6bda382745dac2a4450ecadd1363ea85c72da7.scope - libcontainer container 4c075fc67a2d55c3a6d6e3744e6bda382745dac2a4450ecadd1363ea85c72da7. Jan 30 13:14:05.540979 systemd[1]: Started cri-containerd-95b2c073f2424002cc571ba6cd93d1126a813d026fa27aa9c14fa8cbe918510e.scope - libcontainer container 95b2c073f2424002cc571ba6cd93d1126a813d026fa27aa9c14fa8cbe918510e. 
Jan 30 13:14:05.570634 containerd[1441]: time="2025-01-30T13:14:05.570518980Z" level=info msg="StartContainer for \"95b2c073f2424002cc571ba6cd93d1126a813d026fa27aa9c14fa8cbe918510e\" returns successfully" Jan 30 13:14:05.580574 containerd[1441]: time="2025-01-30T13:14:05.577399533Z" level=info msg="StartContainer for \"4c075fc67a2d55c3a6d6e3744e6bda382745dac2a4450ecadd1363ea85c72da7\" returns successfully" Jan 30 13:14:05.936332 kubelet[2532]: E0130 13:14:05.936268 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:05.942410 kubelet[2532]: E0130 13:14:05.939333 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:05.959927 kubelet[2532]: I0130 13:14:05.959845 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2ljn7" podStartSLOduration=21.95982583 podStartE2EDuration="21.95982583s" podCreationTimestamp="2025-01-30 13:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:14:05.959769656 +0000 UTC m=+27.247342296" watchObservedRunningTime="2025-01-30 13:14:05.95982583 +0000 UTC m=+27.247398430" Jan 30 13:14:05.960137 kubelet[2532]: I0130 13:14:05.959963 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-c8fv7" podStartSLOduration=21.959957103 podStartE2EDuration="21.959957103s" podCreationTimestamp="2025-01-30 13:13:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:14:05.947865783 +0000 UTC m=+27.235438423" watchObservedRunningTime="2025-01-30 13:14:05.959957103 +0000 UTC m=+27.247529783" Jan 30 13:14:06.941853 kubelet[2532]: E0130 13:14:06.941464 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:06.941853 kubelet[2532]: E0130 13:14:06.941578 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:07.943330 kubelet[2532]: E0130 13:14:07.943234 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:10.223270 systemd[1]: Started sshd@8-10.0.0.147:22-10.0.0.1:55242.service - OpenSSH per-connection server daemon (10.0.0.1:55242). Jan 30 13:14:10.274371 sshd[3947]: Accepted publickey for core from 10.0.0.1 port 55242 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:10.277098 sshd-session[3947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:10.283309 systemd-logind[1423]: New session 9 of user core. Jan 30 13:14:10.290559 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:14:10.409257 sshd[3949]: Connection closed by 10.0.0.1 port 55242 Jan 30 13:14:10.409620 sshd-session[3947]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:10.412222 systemd[1]: session-9.scope: Deactivated successfully. 
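Both coredns pods above report firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" — Go's zero time, kubelet's way of saying no pull happened because the image was already on disk; their 21.96 s end-to-end figure is largely time spent waiting for pod networking to come up, as the surrounding records show. A sketch that flags such pods (file name assumed):

```python
import re

# Sketch: flag pods whose startup record carries Go's zero time in
# firstStartedPulling, i.e. no image pull happened, as with the two
# coredns pods above.
ZERO = "0001-01-01 00:00:00 +0000 UTC"
PATTERN = re.compile(
    r'Observed pod startup duration" pod="(?P<pod>[^"]+)" '
    r'podStartSLOduration=[\d.]+ podStartE2EDuration="[^"]+" '
    r'podCreationTimestamp="[^"]+" firstStartedPulling="(?P<t>[^"]+)"')

def pods_without_pull(path="journal.log"):   # file name is an assumption
    with open(path) as fh:
        for m in PATTERN.finditer(fh.read()):
            if m.group("t") == ZERO:
                yield m.group("pod")

if __name__ == "__main__":
    for pod in sorted(set(pods_without_pull())):
        print(pod)
```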
Jan 30 13:14:10.412972 systemd[1]: sshd@8-10.0.0.147:22-10.0.0.1:55242.service: Deactivated successfully. Jan 30 13:14:10.416466 systemd-logind[1423]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:14:10.418158 systemd-logind[1423]: Removed session 9. Jan 30 13:14:15.422100 systemd[1]: Started sshd@9-10.0.0.147:22-10.0.0.1:33206.service - OpenSSH per-connection server daemon (10.0.0.1:33206). Jan 30 13:14:15.467225 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 33206 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:15.468443 sshd-session[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:15.472795 systemd-logind[1423]: New session 10 of user core. Jan 30 13:14:15.482539 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:14:15.620066 sshd[3968]: Connection closed by 10.0.0.1 port 33206 Jan 30 13:14:15.619901 sshd-session[3966]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:15.623520 systemd[1]: sshd@9-10.0.0.147:22-10.0.0.1:33206.service: Deactivated successfully. Jan 30 13:14:15.625761 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:14:15.626700 systemd-logind[1423]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:14:15.627502 systemd-logind[1423]: Removed session 10. Jan 30 13:14:20.630376 systemd[1]: Started sshd@10-10.0.0.147:22-10.0.0.1:33210.service - OpenSSH per-connection server daemon (10.0.0.1:33210). Jan 30 13:14:20.688487 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 33210 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:20.689773 sshd-session[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:20.694972 systemd-logind[1423]: New session 11 of user core. Jan 30 13:14:20.707609 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:14:20.858208 sshd[3983]: Connection closed by 10.0.0.1 port 33210 Jan 30 13:14:20.860678 sshd-session[3981]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:20.869361 systemd[1]: sshd@10-10.0.0.147:22-10.0.0.1:33210.service: Deactivated successfully. Jan 30 13:14:20.872335 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:14:20.878293 systemd-logind[1423]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:14:20.887702 systemd[1]: Started sshd@11-10.0.0.147:22-10.0.0.1:33218.service - OpenSSH per-connection server daemon (10.0.0.1:33218). Jan 30 13:14:20.890320 systemd-logind[1423]: Removed session 11. Jan 30 13:14:20.927247 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 33218 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:20.929118 sshd-session[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:20.933309 systemd-logind[1423]: New session 12 of user core. Jan 30 13:14:20.940527 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:14:21.111482 sshd[3998]: Connection closed by 10.0.0.1 port 33218 Jan 30 13:14:21.110431 sshd-session[3996]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:21.118251 systemd[1]: sshd@11-10.0.0.147:22-10.0.0.1:33218.service: Deactivated successfully. Jan 30 13:14:21.121214 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:14:21.123185 systemd-logind[1423]: Session 12 logged out. Waiting for processes to exit. 
Jan 30 13:14:21.128856 systemd[1]: Started sshd@12-10.0.0.147:22-10.0.0.1:33232.service - OpenSSH per-connection server daemon (10.0.0.1:33232). Jan 30 13:14:21.132055 systemd-logind[1423]: Removed session 12. Jan 30 13:14:21.177069 sshd[4009]: Accepted publickey for core from 10.0.0.1 port 33232 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:21.178739 sshd-session[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:21.182976 systemd-logind[1423]: New session 13 of user core. Jan 30 13:14:21.195567 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:14:21.319628 sshd[4012]: Connection closed by 10.0.0.1 port 33232 Jan 30 13:14:21.320066 sshd-session[4009]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:21.324687 systemd[1]: sshd@12-10.0.0.147:22-10.0.0.1:33232.service: Deactivated successfully. Jan 30 13:14:21.327295 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:14:21.330202 systemd-logind[1423]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:14:21.331410 systemd-logind[1423]: Removed session 13. Jan 30 13:14:26.332329 systemd[1]: Started sshd@13-10.0.0.147:22-10.0.0.1:38608.service - OpenSSH per-connection server daemon (10.0.0.1:38608). Jan 30 13:14:26.381177 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 38608 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:26.382656 sshd-session[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:26.387253 systemd-logind[1423]: New session 14 of user core. Jan 30 13:14:26.398540 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:14:26.518791 sshd[4026]: Connection closed by 10.0.0.1 port 38608 Jan 30 13:14:26.519266 sshd-session[4024]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:26.522292 systemd[1]: sshd@13-10.0.0.147:22-10.0.0.1:38608.service: Deactivated successfully. Jan 30 13:14:26.524000 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:14:26.525532 systemd-logind[1423]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:14:26.526503 systemd-logind[1423]: Removed session 14. Jan 30 13:14:31.539470 systemd[1]: Started sshd@14-10.0.0.147:22-10.0.0.1:38636.service - OpenSSH per-connection server daemon (10.0.0.1:38636). Jan 30 13:14:31.584246 sshd[4038]: Accepted publickey for core from 10.0.0.1 port 38636 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:31.584695 sshd-session[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:31.591569 systemd-logind[1423]: New session 15 of user core. Jan 30 13:14:31.604574 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:14:31.747217 sshd[4040]: Connection closed by 10.0.0.1 port 38636 Jan 30 13:14:31.747831 sshd-session[4038]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:31.762393 systemd[1]: sshd@14-10.0.0.147:22-10.0.0.1:38636.service: Deactivated successfully. Jan 30 13:14:31.764419 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:14:31.767687 systemd-logind[1423]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:14:31.769275 systemd[1]: Started sshd@15-10.0.0.147:22-10.0.0.1:38652.service - OpenSSH per-connection server daemon (10.0.0.1:38652). Jan 30 13:14:31.770157 systemd-logind[1423]: Removed session 15. 
Jan 30 13:14:31.811906 sshd[4052]: Accepted publickey for core from 10.0.0.1 port 38652 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:31.813445 sshd-session[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:31.817755 systemd-logind[1423]: New session 16 of user core. Jan 30 13:14:31.827565 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:14:32.053648 sshd[4054]: Connection closed by 10.0.0.1 port 38652 Jan 30 13:14:32.053424 sshd-session[4052]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:32.067967 systemd[1]: sshd@15-10.0.0.147:22-10.0.0.1:38652.service: Deactivated successfully. Jan 30 13:14:32.069709 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:14:32.071135 systemd-logind[1423]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:14:32.081718 systemd[1]: Started sshd@16-10.0.0.147:22-10.0.0.1:38656.service - OpenSSH per-connection server daemon (10.0.0.1:38656). Jan 30 13:14:32.082903 systemd-logind[1423]: Removed session 16. Jan 30 13:14:32.126948 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 38656 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:32.128562 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:32.134438 systemd-logind[1423]: New session 17 of user core. Jan 30 13:14:32.152639 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:14:32.981097 sshd[4066]: Connection closed by 10.0.0.1 port 38656 Jan 30 13:14:32.980080 sshd-session[4064]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:32.986976 systemd[1]: sshd@16-10.0.0.147:22-10.0.0.1:38656.service: Deactivated successfully. Jan 30 13:14:32.988606 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:14:32.996627 systemd-logind[1423]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:14:33.006204 systemd[1]: Started sshd@17-10.0.0.147:22-10.0.0.1:56662.service - OpenSSH per-connection server daemon (10.0.0.1:56662). Jan 30 13:14:33.007666 systemd-logind[1423]: Removed session 17. Jan 30 13:14:33.047677 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 56662 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:33.049149 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:33.056062 systemd-logind[1423]: New session 18 of user core. Jan 30 13:14:33.067558 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:14:33.302044 sshd[4086]: Connection closed by 10.0.0.1 port 56662 Jan 30 13:14:33.302644 sshd-session[4084]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:33.315905 systemd[1]: sshd@17-10.0.0.147:22-10.0.0.1:56662.service: Deactivated successfully. Jan 30 13:14:33.319400 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:14:33.323164 systemd-logind[1423]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:14:33.331838 systemd[1]: Started sshd@18-10.0.0.147:22-10.0.0.1:56666.service - OpenSSH per-connection server daemon (10.0.0.1:56666). Jan 30 13:14:33.333282 systemd-logind[1423]: Removed session 18. 
Jan 30 13:14:33.370811 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 56666 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:33.372266 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:33.376936 systemd-logind[1423]: New session 19 of user core. Jan 30 13:14:33.385587 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:14:33.507867 sshd[4099]: Connection closed by 10.0.0.1 port 56666 Jan 30 13:14:33.508603 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:33.512313 systemd[1]: sshd@18-10.0.0.147:22-10.0.0.1:56666.service: Deactivated successfully. Jan 30 13:14:33.514162 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:14:33.514931 systemd-logind[1423]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:14:33.516117 systemd-logind[1423]: Removed session 19. Jan 30 13:14:38.519697 systemd[1]: Started sshd@19-10.0.0.147:22-10.0.0.1:56674.service - OpenSSH per-connection server daemon (10.0.0.1:56674). Jan 30 13:14:38.560875 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 56674 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:38.562198 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:38.566399 systemd-logind[1423]: New session 20 of user core. Jan 30 13:14:38.572530 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:14:38.686270 sshd[4117]: Connection closed by 10.0.0.1 port 56674 Jan 30 13:14:38.684297 sshd-session[4115]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:38.689833 systemd[1]: sshd@19-10.0.0.147:22-10.0.0.1:56674.service: Deactivated successfully. Jan 30 13:14:38.691536 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:14:38.692652 systemd-logind[1423]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:14:38.693748 systemd-logind[1423]: Removed session 20. Jan 30 13:14:43.700426 systemd[1]: Started sshd@20-10.0.0.147:22-10.0.0.1:57544.service - OpenSSH per-connection server daemon (10.0.0.1:57544). Jan 30 13:14:43.747909 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 57544 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:43.749801 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:43.756994 systemd-logind[1423]: New session 21 of user core. Jan 30 13:14:43.769577 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:14:43.890211 sshd[4134]: Connection closed by 10.0.0.1 port 57544 Jan 30 13:14:43.890620 sshd-session[4132]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:43.894685 systemd[1]: sshd@20-10.0.0.147:22-10.0.0.1:57544.service: Deactivated successfully. Jan 30 13:14:43.898539 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:14:43.899237 systemd-logind[1423]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:14:43.900250 systemd-logind[1423]: Removed session 21. Jan 30 13:14:48.905497 systemd[1]: Started sshd@21-10.0.0.147:22-10.0.0.1:57548.service - OpenSSH per-connection server daemon (10.0.0.1:57548). 
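The sshd/logind records from 13:14:04 onward give one clean open/close pair per session ("New session N of user core" … "Removed session N."). A sketch that pairs them up and reports durations; it splits the journal on the timestamp prefixes rather than on newlines, since several records can share a line here (file name assumed):

```python
import re
from datetime import datetime

# Sketch: pair logind's "New session N" with "Removed session N" and
# report per-session durations, as for sessions 8-22 above. Records
# are split on their "Jan 30 13:14:10.412972"-style prefixes.
TS = r'(\w{3} +\d{1,2} \d{2}:\d{2}:\d{2}\.\d{6})'

def records(text):
    parts = re.split(TS, text)
    return zip(parts[1::2], parts[2::2])   # (timestamp, record body)

def session_durations(path="journal.log"):   # file name is an assumption
    with open(path) as fh:
        text = fh.read()
    opened = {}
    for ts, body in records(text):
        when = datetime.strptime(ts, "%b %d %H:%M:%S.%f")  # year unused
        if m := re.search(r'New session (\d+) of user', body):
            opened[m.group(1)] = when
        elif m := re.search(r'Removed session (\d+)\.', body):
            start = opened.pop(m.group(1), None)
            if start is not None:
                yield m.group(1), (when - start).total_seconds()

if __name__ == "__main__":
    for n, secs in session_durations():
        print(f"session {n}: {secs:.1f}s")
```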
Jan 30 13:14:48.954268 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 57548 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:48.955678 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:48.959539 systemd-logind[1423]: New session 22 of user core. Jan 30 13:14:48.973668 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:14:49.090797 sshd[4150]: Connection closed by 10.0.0.1 port 57548 Jan 30 13:14:49.091175 sshd-session[4148]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:49.101906 systemd[1]: sshd@21-10.0.0.147:22-10.0.0.1:57548.service: Deactivated successfully. Jan 30 13:14:49.103705 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:14:49.105468 systemd-logind[1423]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:14:49.110620 systemd[1]: Started sshd@22-10.0.0.147:22-10.0.0.1:57562.service - OpenSSH per-connection server daemon (10.0.0.1:57562). Jan 30 13:14:49.111726 systemd-logind[1423]: Removed session 22. Jan 30 13:14:49.150593 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 57562 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:49.151834 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:49.155924 systemd-logind[1423]: New session 23 of user core. Jan 30 13:14:49.166532 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:14:49.813037 kubelet[2532]: E0130 13:14:49.813003 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:50.939460 containerd[1441]: time="2025-01-30T13:14:50.938710740Z" level=info msg="StopContainer for \"f89ed8259c6559bd81ecc0c466342a70029a2e8f6aee691fff4f056baefdc1c8\" with timeout 30 (s)" Jan 30 13:14:50.939460 containerd[1441]: time="2025-01-30T13:14:50.939175217Z" level=info msg="Stop container \"f89ed8259c6559bd81ecc0c466342a70029a2e8f6aee691fff4f056baefdc1c8\" with signal terminated" Jan 30 13:14:50.949925 systemd[1]: cri-containerd-f89ed8259c6559bd81ecc0c466342a70029a2e8f6aee691fff4f056baefdc1c8.scope: Deactivated successfully. Jan 30 13:14:50.971246 containerd[1441]: time="2025-01-30T13:14:50.971181550Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:14:50.975012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f89ed8259c6559bd81ecc0c466342a70029a2e8f6aee691fff4f056baefdc1c8-rootfs.mount: Deactivated successfully. 
Jan 30 13:14:50.976745 containerd[1441]: time="2025-01-30T13:14:50.976045629Z" level=info msg="StopContainer for \"55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff\" with timeout 2 (s)" Jan 30 13:14:50.976745 containerd[1441]: time="2025-01-30T13:14:50.976580540Z" level=info msg="Stop container \"55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff\" with signal terminated" Jan 30 13:14:50.983772 systemd-networkd[1374]: lxc_health: Link DOWN Jan 30 13:14:50.983778 systemd-networkd[1374]: lxc_health: Lost carrier Jan 30 13:14:50.985510 containerd[1441]: time="2025-01-30T13:14:50.985449014Z" level=info msg="shim disconnected" id=f89ed8259c6559bd81ecc0c466342a70029a2e8f6aee691fff4f056baefdc1c8 namespace=k8s.io Jan 30 13:14:50.985578 containerd[1441]: time="2025-01-30T13:14:50.985507369Z" level=warning msg="cleaning up after shim disconnected" id=f89ed8259c6559bd81ecc0c466342a70029a2e8f6aee691fff4f056baefdc1c8 namespace=k8s.io Jan 30 13:14:50.985578 containerd[1441]: time="2025-01-30T13:14:50.985533407Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:14:51.008992 systemd[1]: cri-containerd-55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff.scope: Deactivated successfully. Jan 30 13:14:51.009367 systemd[1]: cri-containerd-55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff.scope: Consumed 6.896s CPU time. Jan 30 13:14:51.032066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff-rootfs.mount: Deactivated successfully. Jan 30 13:14:51.040966 containerd[1441]: time="2025-01-30T13:14:51.040909457Z" level=info msg="StopContainer for \"f89ed8259c6559bd81ecc0c466342a70029a2e8f6aee691fff4f056baefdc1c8\" returns successfully" Jan 30 13:14:51.041616 containerd[1441]: time="2025-01-30T13:14:51.041013408Z" level=info msg="shim disconnected" id=55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff namespace=k8s.io Jan 30 13:14:51.041616 containerd[1441]: time="2025-01-30T13:14:51.041069843Z" level=warning msg="cleaning up after shim disconnected" id=55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff namespace=k8s.io Jan 30 13:14:51.041616 containerd[1441]: time="2025-01-30T13:14:51.041081042Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:14:51.041840 containerd[1441]: time="2025-01-30T13:14:51.041808940Z" level=info msg="StopPodSandbox for \"8103afaad06d4899ba7dbed27f039f3afa55c81130fc9acf1050cd58b16e2923\"" Jan 30 13:14:51.043305 containerd[1441]: time="2025-01-30T13:14:51.043257935Z" level=info msg="Container to stop \"f89ed8259c6559bd81ecc0c466342a70029a2e8f6aee691fff4f056baefdc1c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:14:51.045429 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8103afaad06d4899ba7dbed27f039f3afa55c81130fc9acf1050cd58b16e2923-shm.mount: Deactivated successfully. Jan 30 13:14:51.051961 systemd[1]: cri-containerd-8103afaad06d4899ba7dbed27f039f3afa55c81130fc9acf1050cd58b16e2923.scope: Deactivated successfully. Jan 30 13:14:51.077314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8103afaad06d4899ba7dbed27f039f3afa55c81130fc9acf1050cd58b16e2923-rootfs.mount: Deactivated successfully. 
Jan 30 13:14:51.081051 containerd[1441]: time="2025-01-30T13:14:51.080996778Z" level=info msg="StopContainer for \"55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff\" returns successfully"
Jan 30 13:14:51.081642 containerd[1441]: time="2025-01-30T13:14:51.081616485Z" level=info msg="StopPodSandbox for \"db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a\""
Jan 30 13:14:51.081684 containerd[1441]: time="2025-01-30T13:14:51.081658962Z" level=info msg="Container to stop \"4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:14:51.081684 containerd[1441]: time="2025-01-30T13:14:51.081670081Z" level=info msg="Container to stop \"02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:14:51.081684 containerd[1441]: time="2025-01-30T13:14:51.081678680Z" level=info msg="Container to stop \"a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:14:51.081754 containerd[1441]: time="2025-01-30T13:14:51.081689759Z" level=info msg="Container to stop \"55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:14:51.081754 containerd[1441]: time="2025-01-30T13:14:51.081698638Z" level=info msg="Container to stop \"b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:14:51.085897 containerd[1441]: time="2025-01-30T13:14:51.085813405Z" level=info msg="shim disconnected" id=8103afaad06d4899ba7dbed27f039f3afa55c81130fc9acf1050cd58b16e2923 namespace=k8s.io
Jan 30 13:14:51.085897 containerd[1441]: time="2025-01-30T13:14:51.085893918Z" level=warning msg="cleaning up after shim disconnected" id=8103afaad06d4899ba7dbed27f039f3afa55c81130fc9acf1050cd58b16e2923 namespace=k8s.io
Jan 30 13:14:51.086047 containerd[1441]: time="2025-01-30T13:14:51.085902598Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:14:51.091564 systemd[1]: cri-containerd-db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a.scope: Deactivated successfully.
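Each StopContainer/StopPodSandbox pair above is a CRI RPC issued by the kubelet; the "Container to stop ... must be in running or unknown state" lines show containerd skipping containers that had already exited. The same state can be inspected from the node with the real crictl CLI. A sketch, assuming crictl is configured against the local containerd socket; the pod-sandbox ID below is a placeholder, not one from this log:

```python
#!/usr/bin/env python3
"""Sketch: inspect the CRI state behind StopContainer/StopPodSandbox."""
import subprocess

def crictl(*args: str) -> str:
    """Run a crictl subcommand and return its stdout."""
    return subprocess.run(
        ["crictl", *args], check=True, capture_output=True, text=True
    ).stdout

if __name__ == "__main__":
    # List all containers, including exited ones -- the states the
    # kubelet reasons about before calling StopPodSandbox.
    print(crictl("ps", "-a"))
    # Stopping a sandbox tears down its remaining containers first,
    # which is the ordering visible in the journal above.
    pod_id = "REPLACE_WITH_POD_SANDBOX_ID"  # placeholder
    crictl("stopp", pod_id)
```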
Jan 30 13:14:51.109118 containerd[1441]: time="2025-01-30T13:14:51.109068650Z" level=info msg="TearDown network for sandbox \"8103afaad06d4899ba7dbed27f039f3afa55c81130fc9acf1050cd58b16e2923\" successfully"
Jan 30 13:14:51.109118 containerd[1441]: time="2025-01-30T13:14:51.109104567Z" level=info msg="StopPodSandbox for \"8103afaad06d4899ba7dbed27f039f3afa55c81130fc9acf1050cd58b16e2923\" returns successfully"
Jan 30 13:14:51.132515 containerd[1441]: time="2025-01-30T13:14:51.132436526Z" level=info msg="shim disconnected" id=db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a namespace=k8s.io
Jan 30 13:14:51.132515 containerd[1441]: time="2025-01-30T13:14:51.132502600Z" level=warning msg="cleaning up after shim disconnected" id=db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a namespace=k8s.io
Jan 30 13:14:51.132515 containerd[1441]: time="2025-01-30T13:14:51.132513519Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:14:51.153698 containerd[1441]: time="2025-01-30T13:14:51.153508678Z" level=info msg="TearDown network for sandbox \"db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a\" successfully"
Jan 30 13:14:51.153698 containerd[1441]: time="2025-01-30T13:14:51.153548315Z" level=info msg="StopPodSandbox for \"db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a\" returns successfully"
Jan 30 13:14:51.257795 kubelet[2532]: I0130 13:14:51.257599 2532 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-bpf-maps\") pod \"58247a37-c70f-452f-9833-a523de0361e9\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") "
Jan 30 13:14:51.257795 kubelet[2532]: I0130 13:14:51.257643 2532 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-host-proc-sys-net\") pod \"58247a37-c70f-452f-9833-a523de0361e9\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") "
Jan 30 13:14:51.257795 kubelet[2532]: I0130 13:14:51.257660 2532 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-xtables-lock\") pod \"58247a37-c70f-452f-9833-a523de0361e9\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") "
Jan 30 13:14:51.257795 kubelet[2532]: I0130 13:14:51.257699 2532 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-cilium-cgroup\") pod \"58247a37-c70f-452f-9833-a523de0361e9\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") "
Jan 30 13:14:51.257795 kubelet[2532]: I0130 13:14:51.257715 2532 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-etc-cni-netd\") pod \"58247a37-c70f-452f-9833-a523de0361e9\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") "
Jan 30 13:14:51.257795 kubelet[2532]: I0130 13:14:51.257736 2532 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58247a37-c70f-452f-9833-a523de0361e9-cilium-config-path\") pod \"58247a37-c70f-452f-9833-a523de0361e9\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") "
Jan 30 13:14:51.258914 kubelet[2532]: I0130 13:14:51.257751 2532 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-hostproc\") pod \"58247a37-c70f-452f-9833-a523de0361e9\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") "
Jan 30 13:14:51.258914 kubelet[2532]: I0130 13:14:51.257765 2532 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-lib-modules\") pod \"58247a37-c70f-452f-9833-a523de0361e9\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") "
Jan 30 13:14:51.258914 kubelet[2532]: I0130 13:14:51.257780 2532 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-host-proc-sys-kernel\") pod \"58247a37-c70f-452f-9833-a523de0361e9\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") "
Jan 30 13:14:51.258914 kubelet[2532]: I0130 13:14:51.257798 2532 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82x4f\" (UniqueName: \"kubernetes.io/projected/cf43b0be-9bc2-457e-84d8-38272311301a-kube-api-access-82x4f\") pod \"cf43b0be-9bc2-457e-84d8-38272311301a\" (UID: \"cf43b0be-9bc2-457e-84d8-38272311301a\") "
Jan 30 13:14:51.258914 kubelet[2532]: I0130 13:14:51.257815 2532 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf43b0be-9bc2-457e-84d8-38272311301a-cilium-config-path\") pod \"cf43b0be-9bc2-457e-84d8-38272311301a\" (UID: \"cf43b0be-9bc2-457e-84d8-38272311301a\") "
Jan 30 13:14:51.258914 kubelet[2532]: I0130 13:14:51.257926 2532 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58247a37-c70f-452f-9833-a523de0361e9-clustermesh-secrets\") pod \"58247a37-c70f-452f-9833-a523de0361e9\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") "
Jan 30 13:14:51.261266 kubelet[2532]: I0130 13:14:51.261231 2532 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "58247a37-c70f-452f-9833-a523de0361e9" (UID: "58247a37-c70f-452f-9833-a523de0361e9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:14:51.261337 kubelet[2532]: I0130 13:14:51.261269 2532 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "58247a37-c70f-452f-9833-a523de0361e9" (UID: "58247a37-c70f-452f-9833-a523de0361e9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:14:51.261337 kubelet[2532]: I0130 13:14:51.261310 2532 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "58247a37-c70f-452f-9833-a523de0361e9" (UID: "58247a37-c70f-452f-9833-a523de0361e9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:14:51.261337 kubelet[2532]: I0130 13:14:51.261329 2532 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "58247a37-c70f-452f-9833-a523de0361e9" (UID: "58247a37-c70f-452f-9833-a523de0361e9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:14:51.261417 kubelet[2532]: I0130 13:14:51.261334 2532 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "58247a37-c70f-452f-9833-a523de0361e9" (UID: "58247a37-c70f-452f-9833-a523de0361e9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:14:51.261417 kubelet[2532]: I0130 13:14:51.261369 2532 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-hostproc" (OuterVolumeSpecName: "hostproc") pod "58247a37-c70f-452f-9833-a523de0361e9" (UID: "58247a37-c70f-452f-9833-a523de0361e9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:14:51.261701 kubelet[2532]: I0130 13:14:51.261472 2532 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "58247a37-c70f-452f-9833-a523de0361e9" (UID: "58247a37-c70f-452f-9833-a523de0361e9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:14:51.261701 kubelet[2532]: I0130 13:14:51.261521 2532 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "58247a37-c70f-452f-9833-a523de0361e9" (UID: "58247a37-c70f-452f-9833-a523de0361e9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:14:51.268005 kubelet[2532]: I0130 13:14:51.267904 2532 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf43b0be-9bc2-457e-84d8-38272311301a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cf43b0be-9bc2-457e-84d8-38272311301a" (UID: "cf43b0be-9bc2-457e-84d8-38272311301a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 13:14:51.268103 kubelet[2532]: I0130 13:14:51.268048 2532 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58247a37-c70f-452f-9833-a523de0361e9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "58247a37-c70f-452f-9833-a523de0361e9" (UID: "58247a37-c70f-452f-9833-a523de0361e9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 30 13:14:51.268398 kubelet[2532]: I0130 13:14:51.268373 2532 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf43b0be-9bc2-457e-84d8-38272311301a-kube-api-access-82x4f" (OuterVolumeSpecName: "kube-api-access-82x4f") pod "cf43b0be-9bc2-457e-84d8-38272311301a" (UID: "cf43b0be-9bc2-457e-84d8-38272311301a"). InnerVolumeSpecName "kube-api-access-82x4f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 13:14:51.269495 kubelet[2532]: I0130 13:14:51.269460 2532 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58247a37-c70f-452f-9833-a523de0361e9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "58247a37-c70f-452f-9833-a523de0361e9" (UID: "58247a37-c70f-452f-9833-a523de0361e9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 30 13:14:51.358263 kubelet[2532]: I0130 13:14:51.358206 2532 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58247a37-c70f-452f-9833-a523de0361e9-hubble-tls\") pod \"58247a37-c70f-452f-9833-a523de0361e9\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") "
Jan 30 13:14:51.358263 kubelet[2532]: I0130 13:14:51.358255 2532 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm24h\" (UniqueName: \"kubernetes.io/projected/58247a37-c70f-452f-9833-a523de0361e9-kube-api-access-dm24h\") pod \"58247a37-c70f-452f-9833-a523de0361e9\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") "
Jan 30 13:14:51.358263 kubelet[2532]: I0130 13:14:51.358276 2532 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-cni-path\") pod \"58247a37-c70f-452f-9833-a523de0361e9\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") "
Jan 30 13:14:51.358471 kubelet[2532]: I0130 13:14:51.358291 2532 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-cilium-run\") pod \"58247a37-c70f-452f-9833-a523de0361e9\" (UID: \"58247a37-c70f-452f-9833-a523de0361e9\") "
Jan 30 13:14:51.358471 kubelet[2532]: I0130 13:14:51.358330 2532 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 30 13:14:51.358471 kubelet[2532]: I0130 13:14:51.358361 2532 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 30 13:14:51.358471 kubelet[2532]: I0130 13:14:51.358373 2532 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 30 13:14:51.358471 kubelet[2532]: I0130 13:14:51.358381 2532 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 30 13:14:51.358471 kubelet[2532]: I0130 13:14:51.358390 2532 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 30 13:14:51.358471 kubelet[2532]: I0130 13:14:51.358397 2532 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58247a37-c70f-452f-9833-a523de0361e9-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 30 13:14:51.358471 kubelet[2532]: I0130 13:14:51.358404 2532 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 30 13:14:51.358637 kubelet[2532]: I0130 13:14:51.358412 2532 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 30 13:14:51.358637 kubelet[2532]: I0130 13:14:51.358420 2532 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 30 13:14:51.358637 kubelet[2532]: I0130 13:14:51.358428 2532 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-82x4f\" (UniqueName: \"kubernetes.io/projected/cf43b0be-9bc2-457e-84d8-38272311301a-kube-api-access-82x4f\") on node \"localhost\" DevicePath \"\""
Jan 30 13:14:51.358637 kubelet[2532]: I0130 13:14:51.358437 2532 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf43b0be-9bc2-457e-84d8-38272311301a-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 30 13:14:51.358637 kubelet[2532]: I0130 13:14:51.358444 2532 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58247a37-c70f-452f-9833-a523de0361e9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 30 13:14:51.358637 kubelet[2532]: I0130 13:14:51.358475 2532 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "58247a37-c70f-452f-9833-a523de0361e9" (UID: "58247a37-c70f-452f-9833-a523de0361e9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:14:51.358831 kubelet[2532]: I0130 13:14:51.358795 2532 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-cni-path" (OuterVolumeSpecName: "cni-path") pod "58247a37-c70f-452f-9833-a523de0361e9" (UID: "58247a37-c70f-452f-9833-a523de0361e9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 30 13:14:51.360647 kubelet[2532]: I0130 13:14:51.360610 2532 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58247a37-c70f-452f-9833-a523de0361e9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "58247a37-c70f-452f-9833-a523de0361e9" (UID: "58247a37-c70f-452f-9833-a523de0361e9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 13:14:51.360715 kubelet[2532]: I0130 13:14:51.360645 2532 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58247a37-c70f-452f-9833-a523de0361e9-kube-api-access-dm24h" (OuterVolumeSpecName: "kube-api-access-dm24h") pod "58247a37-c70f-452f-9833-a523de0361e9" (UID: "58247a37-c70f-452f-9833-a523de0361e9"). InnerVolumeSpecName "kube-api-access-dm24h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 30 13:14:51.458911 kubelet[2532]: I0130 13:14:51.458860 2532 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 30 13:14:51.458911 kubelet[2532]: I0130 13:14:51.458894 2532 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58247a37-c70f-452f-9833-a523de0361e9-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 30 13:14:51.458911 kubelet[2532]: I0130 13:14:51.458903 2532 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58247a37-c70f-452f-9833-a523de0361e9-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 30 13:14:51.458911 kubelet[2532]: I0130 13:14:51.458913 2532 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dm24h\" (UniqueName: \"kubernetes.io/projected/58247a37-c70f-452f-9833-a523de0361e9-kube-api-access-dm24h\") on node \"localhost\" DevicePath \"\""
Jan 30 13:14:51.954286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a-rootfs.mount: Deactivated successfully.
Jan 30 13:14:51.954399 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db7ebeac6b827be283626c2ed271f2df01fd94e7bc4992629fa4425088c51e5a-shm.mount: Deactivated successfully.
Jan 30 13:14:51.954456 systemd[1]: var-lib-kubelet-pods-cf43b0be\x2d9bc2\x2d457e\x2d84d8\x2d38272311301a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d82x4f.mount: Deactivated successfully.
Jan 30 13:14:51.954512 systemd[1]: var-lib-kubelet-pods-58247a37\x2dc70f\x2d452f\x2d9833\x2da523de0361e9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddm24h.mount: Deactivated successfully.
Jan 30 13:14:51.954569 systemd[1]: var-lib-kubelet-pods-58247a37\x2dc70f\x2d452f\x2d9833\x2da523de0361e9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 30 13:14:51.954617 systemd[1]: var-lib-kubelet-pods-58247a37\x2dc70f\x2d452f\x2d9833\x2da523de0361e9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 30 13:14:52.056407 kubelet[2532]: I0130 13:14:52.056280 2532 scope.go:117] "RemoveContainer" containerID="55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff"
Jan 30 13:14:52.059250 containerd[1441]: time="2025-01-30T13:14:52.059203464Z" level=info msg="RemoveContainer for \"55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff\""
Jan 30 13:14:52.063395 systemd[1]: Removed slice kubepods-burstable-pod58247a37_c70f_452f_9833_a523de0361e9.slice - libcontainer container kubepods-burstable-pod58247a37_c70f_452f_9833_a523de0361e9.slice.
Jan 30 13:14:52.063521 systemd[1]: kubepods-burstable-pod58247a37_c70f_452f_9833_a523de0361e9.slice: Consumed 7.033s CPU time.
Jan 30 13:14:52.064782 systemd[1]: Removed slice kubepods-besteffort-podcf43b0be_9bc2_457e_84d8_38272311301a.slice - libcontainer container kubepods-besteffort-podcf43b0be_9bc2_457e_84d8_38272311301a.slice.
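The mount units above, such as var-lib-kubelet-pods-...-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount, use systemd's unit-name escaping: '/' becomes '-', and other bytes (including literal hyphens and '~') are encoded as \xNN. The same decoding `systemd-escape --unescape --path` performs can be sketched in a few lines; this is a plain re-implementation for illustration, not systemd's code:

```python
#!/usr/bin/env python3
"""Sketch: decode systemd-escaped mount unit names back into paths."""
import re

def unescape_unit_path(unit: str) -> str:
    name = unit.removesuffix(".mount")
    # Turn component separators back into '/' first; the \xNN escapes
    # contain no '-', so an escaped hyphen is never split here.
    path = "/" + name.replace("-", "/")
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), path)

if __name__ == "__main__":
    unit = (r"var-lib-kubelet-pods-58247a37\x2dc70f\x2d452f\x2d9833"
            r"\x2da523de0361e9-volumes-kubernetes.io\x7eprojected"
            r"-hubble\x2dtls.mount")  # unit name taken from the log above
    print(unescape_unit_path(unit))
    # -> /var/lib/kubelet/pods/58247a37-c70f-452f-9833-a523de0361e9/
    #    volumes/kubernetes.io~projected/hubble-tls
```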
Jan 30 13:14:52.067248 containerd[1441]: time="2025-01-30T13:14:52.066457037Z" level=info msg="RemoveContainer for \"55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff\" returns successfully"
Jan 30 13:14:52.069842 kubelet[2532]: I0130 13:14:52.069024 2532 scope.go:117] "RemoveContainer" containerID="a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39"
Jan 30 13:14:52.070294 containerd[1441]: time="2025-01-30T13:14:52.070224892Z" level=info msg="RemoveContainer for \"a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39\""
Jan 30 13:14:52.073288 containerd[1441]: time="2025-01-30T13:14:52.073238528Z" level=info msg="RemoveContainer for \"a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39\" returns successfully"
Jan 30 13:14:52.073538 kubelet[2532]: I0130 13:14:52.073499 2532 scope.go:117] "RemoveContainer" containerID="b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636"
Jan 30 13:14:52.074978 containerd[1441]: time="2025-01-30T13:14:52.074940751Z" level=info msg="RemoveContainer for \"b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636\""
Jan 30 13:14:52.077335 containerd[1441]: time="2025-01-30T13:14:52.077286921Z" level=info msg="RemoveContainer for \"b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636\" returns successfully"
Jan 30 13:14:52.077630 kubelet[2532]: I0130 13:14:52.077592 2532 scope.go:117] "RemoveContainer" containerID="02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a"
Jan 30 13:14:52.078719 containerd[1441]: time="2025-01-30T13:14:52.078682488Z" level=info msg="RemoveContainer for \"02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a\""
Jan 30 13:14:52.088541 containerd[1441]: time="2025-01-30T13:14:52.088492815Z" level=info msg="RemoveContainer for \"02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a\" returns successfully"
Jan 30 13:14:52.088746 kubelet[2532]: I0130 13:14:52.088715 2532 scope.go:117] "RemoveContainer" containerID="4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261"
Jan 30 13:14:52.090091 containerd[1441]: time="2025-01-30T13:14:52.090061488Z" level=info msg="RemoveContainer for \"4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261\""
Jan 30 13:14:52.092524 containerd[1441]: time="2025-01-30T13:14:52.092485372Z" level=info msg="RemoveContainer for \"4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261\" returns successfully"
Jan 30 13:14:52.092695 kubelet[2532]: I0130 13:14:52.092669 2532 scope.go:117] "RemoveContainer" containerID="55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff"
Jan 30 13:14:52.092954 containerd[1441]: time="2025-01-30T13:14:52.092877140Z" level=error msg="ContainerStatus for \"55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff\": not found"
Jan 30 13:14:52.093049 kubelet[2532]: E0130 13:14:52.093023 2532 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff\": not found" containerID="55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff"
Jan 30 13:14:52.093140 kubelet[2532]: I0130 13:14:52.093057 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff"} err="failed to get container status \"55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"55aa16552f99b5cd85c590b143cb234b8f427de8df063224cdc17905008aa7ff\": not found"
Jan 30 13:14:52.093169 kubelet[2532]: I0130 13:14:52.093142 2532 scope.go:117] "RemoveContainer" containerID="a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39"
Jan 30 13:14:52.093468 containerd[1441]: time="2025-01-30T13:14:52.093419056Z" level=error msg="ContainerStatus for \"a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39\": not found"
Jan 30 13:14:52.093616 kubelet[2532]: E0130 13:14:52.093579 2532 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39\": not found" containerID="a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39"
Jan 30 13:14:52.093653 kubelet[2532]: I0130 13:14:52.093622 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39"} err="failed to get container status \"a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39\": rpc error: code = NotFound desc = an error occurred when try to find container \"a947ebbc8cab3fb536f0ed5a4bdf132fc232589ca3b994fe72a2f6524b606c39\": not found"
Jan 30 13:14:52.093653 kubelet[2532]: I0130 13:14:52.093644 2532 scope.go:117] "RemoveContainer" containerID="b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636"
Jan 30 13:14:52.093847 containerd[1441]: time="2025-01-30T13:14:52.093818264Z" level=error msg="ContainerStatus for \"b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636\": not found"
Jan 30 13:14:52.093953 kubelet[2532]: E0130 13:14:52.093933 2532 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636\": not found" containerID="b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636"
Jan 30 13:14:52.093991 kubelet[2532]: I0130 13:14:52.093959 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636"} err="failed to get container status \"b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5bf4ce25643ffd9792b32dc2528ff9db5000e01c26a7a4cf4b31e6cb4518636\": not found"
Jan 30 13:14:52.093991 kubelet[2532]: I0130 13:14:52.093974 2532 scope.go:117] "RemoveContainer" containerID="02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a"
Jan 30 13:14:52.094205 containerd[1441]: time="2025-01-30T13:14:52.094140518Z" level=error msg="ContainerStatus for \"02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a\": not found"
Jan 30 13:14:52.094297 kubelet[2532]: E0130 13:14:52.094273 2532 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a\": not found" containerID="02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a"
Jan 30 13:14:52.094335 kubelet[2532]: I0130 13:14:52.094298 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a"} err="failed to get container status \"02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a\": rpc error: code = NotFound desc = an error occurred when try to find container \"02d42c5b834e0dcfe301c8e92ece0134afd5ff788b9cb86004e2b53cae80664a\": not found"
Jan 30 13:14:52.094335 kubelet[2532]: I0130 13:14:52.094335 2532 scope.go:117] "RemoveContainer" containerID="4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261"
Jan 30 13:14:52.094536 containerd[1441]: time="2025-01-30T13:14:52.094508848Z" level=error msg="ContainerStatus for \"4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261\": not found"
Jan 30 13:14:52.094651 kubelet[2532]: E0130 13:14:52.094632 2532 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261\": not found" containerID="4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261"
Jan 30 13:14:52.094676 kubelet[2532]: I0130 13:14:52.094658 2532 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261"} err="failed to get container status \"4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c0746d59494d4cb2776cccda74001a649dd89923bd2e342df60ea84fd1cd261\": not found"
Jan 30 13:14:52.094676 kubelet[2532]: I0130 13:14:52.094671 2532 scope.go:117] "RemoveContainer" containerID="f89ed8259c6559bd81ecc0c466342a70029a2e8f6aee691fff4f056baefdc1c8"
Jan 30 13:14:52.095659 containerd[1441]: time="2025-01-30T13:14:52.095636037Z" level=info msg="RemoveContainer for \"f89ed8259c6559bd81ecc0c466342a70029a2e8f6aee691fff4f056baefdc1c8\""
Jan 30 13:14:52.097899 containerd[1441]: time="2025-01-30T13:14:52.097855857Z" level=info msg="RemoveContainer for \"f89ed8259c6559bd81ecc0c466342a70029a2e8f6aee691fff4f056baefdc1c8\" returns successfully"
Jan 30 13:14:52.816676 kubelet[2532]: I0130 13:14:52.816627 2532 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58247a37-c70f-452f-9833-a523de0361e9" path="/var/lib/kubelet/pods/58247a37-c70f-452f-9833-a523de0361e9/volumes"
Jan 30 13:14:52.817228 kubelet[2532]: I0130 13:14:52.817205 2532 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf43b0be-9bc2-457e-84d8-38272311301a" path="/var/lib/kubelet/pods/cf43b0be-9bc2-457e-84d8-38272311301a/volumes"
Jan 30 13:14:52.889495 sshd[4164]: Connection closed by 10.0.0.1 port 57562
Jan 30 13:14:52.891130 sshd-session[4162]: pam_unix(sshd:session): session closed for user core
Jan 30 13:14:52.899137 systemd[1]: sshd@22-10.0.0.147:22-10.0.0.1:57562.service: Deactivated successfully.
Jan 30 13:14:52.900782 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 13:14:52.900966 systemd[1]: session-23.scope: Consumed 1.088s CPU time.
Jan 30 13:14:52.902143 systemd-logind[1423]: Session 23 logged out. Waiting for processes to exit.
Jan 30 13:14:52.906803 systemd[1]: Started sshd@23-10.0.0.147:22-10.0.0.1:36570.service - OpenSSH per-connection server daemon (10.0.0.1:36570).
Jan 30 13:14:52.907748 systemd-logind[1423]: Removed session 23.
Jan 30 13:14:52.967869 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 36570 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:14:52.969722 sshd-session[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:14:52.974158 systemd-logind[1423]: New session 24 of user core.
Jan 30 13:14:52.983529 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 13:14:53.451172 sshd[4324]: Connection closed by 10.0.0.1 port 36570
Jan 30 13:14:53.454641 sshd-session[4322]: pam_unix(sshd:session): session closed for user core
Jan 30 13:14:53.464717 systemd[1]: sshd@23-10.0.0.147:22-10.0.0.1:36570.service: Deactivated successfully.
Jan 30 13:14:53.467517 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 13:14:53.472958 kubelet[2532]: I0130 13:14:53.472491 2532 memory_manager.go:355] "RemoveStaleState removing state" podUID="58247a37-c70f-452f-9833-a523de0361e9" containerName="cilium-agent"
Jan 30 13:14:53.473068 kubelet[2532]: I0130 13:14:53.472987 2532 memory_manager.go:355] "RemoveStaleState removing state" podUID="cf43b0be-9bc2-457e-84d8-38272311301a" containerName="cilium-operator"
Jan 30 13:14:53.473144 systemd-logind[1423]: Session 24 logged out. Waiting for processes to exit.
Jan 30 13:14:53.480507 systemd[1]: Started sshd@24-10.0.0.147:22-10.0.0.1:36586.service - OpenSSH per-connection server daemon (10.0.0.1:36586).
Jan 30 13:14:53.484401 systemd-logind[1423]: Removed session 24.
Jan 30 13:14:53.498387 systemd[1]: Created slice kubepods-burstable-pod6830b4ac_f1d6_4167_b811_071751e407a8.slice - libcontainer container kubepods-burstable-pod6830b4ac_f1d6_4167_b811_071751e407a8.slice.
Jan 30 13:14:53.531300 sshd[4335]: Accepted publickey for core from 10.0.0.1 port 36586 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:14:53.532962 sshd-session[4335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:14:53.538147 systemd-logind[1423]: New session 25 of user core.
Jan 30 13:14:53.553555 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 13:14:53.604125 sshd[4337]: Connection closed by 10.0.0.1 port 36586
Jan 30 13:14:53.605508 sshd-session[4335]: pam_unix(sshd:session): session closed for user core
Jan 30 13:14:53.621960 systemd[1]: sshd@24-10.0.0.147:22-10.0.0.1:36586.service: Deactivated successfully.
Jan 30 13:14:53.625058 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 13:14:53.626672 systemd-logind[1423]: Session 25 logged out. Waiting for processes to exit.
Jan 30 13:14:53.636917 systemd[1]: Started sshd@25-10.0.0.147:22-10.0.0.1:36594.service - OpenSSH per-connection server daemon (10.0.0.1:36594).
Jan 30 13:14:53.637932 systemd-logind[1423]: Removed session 25.
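The ContainerStatus NotFound errors at 13:14:52 are benign: after RemoveContainer succeeds, the kubelet re-queries the runtime for containers that no longer exist and treats NotFound as "already deleted". A client doing its own cleanup would follow the same idempotent pattern; a sketch using the real crictl CLI, with a placeholder container ID:

```python
#!/usr/bin/env python3
"""Sketch: the idempotent-delete pattern behind the NotFound errors."""
import subprocess

def remove_container(container_id: str) -> None:
    """Remove a container, treating 'not found' as already done."""
    proc = subprocess.run(
        ["crictl", "rm", container_id], capture_output=True, text=True
    )
    if proc.returncode != 0 and "not found" not in proc.stderr.lower():
        # Only genuine failures propagate; NotFound means the desired
        # end state (container gone) already holds.
        raise RuntimeError(proc.stderr.strip())

if __name__ == "__main__":
    remove_container("REPLACE_WITH_CONTAINER_ID")  # placeholder
```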
Jan 30 13:14:53.671751 kubelet[2532]: I0130 13:14:53.671675 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6830b4ac-f1d6-4167-b811-071751e407a8-cilium-run\") pod \"cilium-flw5p\" (UID: \"6830b4ac-f1d6-4167-b811-071751e407a8\") " pod="kube-system/cilium-flw5p"
Jan 30 13:14:53.672507 kubelet[2532]: I0130 13:14:53.672430 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6830b4ac-f1d6-4167-b811-071751e407a8-host-proc-sys-net\") pod \"cilium-flw5p\" (UID: \"6830b4ac-f1d6-4167-b811-071751e407a8\") " pod="kube-system/cilium-flw5p"
Jan 30 13:14:53.672857 kubelet[2532]: I0130 13:14:53.672624 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6830b4ac-f1d6-4167-b811-071751e407a8-cilium-ipsec-secrets\") pod \"cilium-flw5p\" (UID: \"6830b4ac-f1d6-4167-b811-071751e407a8\") " pod="kube-system/cilium-flw5p"
Jan 30 13:14:53.672857 kubelet[2532]: I0130 13:14:53.672651 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9jfr\" (UniqueName: \"kubernetes.io/projected/6830b4ac-f1d6-4167-b811-071751e407a8-kube-api-access-l9jfr\") pod \"cilium-flw5p\" (UID: \"6830b4ac-f1d6-4167-b811-071751e407a8\") " pod="kube-system/cilium-flw5p"
Jan 30 13:14:53.672857 kubelet[2532]: I0130 13:14:53.672669 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6830b4ac-f1d6-4167-b811-071751e407a8-hostproc\") pod \"cilium-flw5p\" (UID: \"6830b4ac-f1d6-4167-b811-071751e407a8\") " pod="kube-system/cilium-flw5p"
Jan 30 13:14:53.672857 kubelet[2532]: I0130 13:14:53.672685 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6830b4ac-f1d6-4167-b811-071751e407a8-cilium-cgroup\") pod \"cilium-flw5p\" (UID: \"6830b4ac-f1d6-4167-b811-071751e407a8\") " pod="kube-system/cilium-flw5p"
Jan 30 13:14:53.672857 kubelet[2532]: I0130 13:14:53.672719 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6830b4ac-f1d6-4167-b811-071751e407a8-hubble-tls\") pod \"cilium-flw5p\" (UID: \"6830b4ac-f1d6-4167-b811-071751e407a8\") " pod="kube-system/cilium-flw5p"
Jan 30 13:14:53.672857 kubelet[2532]: I0130 13:14:53.672747 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6830b4ac-f1d6-4167-b811-071751e407a8-xtables-lock\") pod \"cilium-flw5p\" (UID: \"6830b4ac-f1d6-4167-b811-071751e407a8\") " pod="kube-system/cilium-flw5p"
Jan 30 13:14:53.673112 kubelet[2532]: I0130 13:14:53.672766 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6830b4ac-f1d6-4167-b811-071751e407a8-host-proc-sys-kernel\") pod \"cilium-flw5p\" (UID: \"6830b4ac-f1d6-4167-b811-071751e407a8\") " pod="kube-system/cilium-flw5p"
Jan 30 13:14:53.673112 kubelet[2532]: I0130 13:14:53.672793 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6830b4ac-f1d6-4167-b811-071751e407a8-cni-path\") pod \"cilium-flw5p\" (UID: \"6830b4ac-f1d6-4167-b811-071751e407a8\") " pod="kube-system/cilium-flw5p"
Jan 30 13:14:53.673112 kubelet[2532]: I0130 13:14:53.672816 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6830b4ac-f1d6-4167-b811-071751e407a8-clustermesh-secrets\") pod \"cilium-flw5p\" (UID: \"6830b4ac-f1d6-4167-b811-071751e407a8\") " pod="kube-system/cilium-flw5p"
Jan 30 13:14:53.673112 kubelet[2532]: I0130 13:14:53.672837 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6830b4ac-f1d6-4167-b811-071751e407a8-bpf-maps\") pod \"cilium-flw5p\" (UID: \"6830b4ac-f1d6-4167-b811-071751e407a8\") " pod="kube-system/cilium-flw5p"
Jan 30 13:14:53.673112 kubelet[2532]: I0130 13:14:53.672943 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6830b4ac-f1d6-4167-b811-071751e407a8-etc-cni-netd\") pod \"cilium-flw5p\" (UID: \"6830b4ac-f1d6-4167-b811-071751e407a8\") " pod="kube-system/cilium-flw5p"
Jan 30 13:14:53.673112 kubelet[2532]: I0130 13:14:53.673065 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6830b4ac-f1d6-4167-b811-071751e407a8-lib-modules\") pod \"cilium-flw5p\" (UID: \"6830b4ac-f1d6-4167-b811-071751e407a8\") " pod="kube-system/cilium-flw5p"
Jan 30 13:14:53.673242 kubelet[2532]: I0130 13:14:53.673084 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6830b4ac-f1d6-4167-b811-071751e407a8-cilium-config-path\") pod \"cilium-flw5p\" (UID: \"6830b4ac-f1d6-4167-b811-071751e407a8\") " pod="kube-system/cilium-flw5p"
Jan 30 13:14:53.674492 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 36594 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 13:14:53.676007 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:14:53.680047 systemd-logind[1423]: New session 26 of user core.
Jan 30 13:14:53.686524 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 13:14:53.803148 kubelet[2532]: E0130 13:14:53.803112 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:14:53.803675 containerd[1441]: time="2025-01-30T13:14:53.803629483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-flw5p,Uid:6830b4ac-f1d6-4167-b811-071751e407a8,Namespace:kube-system,Attempt:0,}"
Jan 30 13:14:53.828537 containerd[1441]: time="2025-01-30T13:14:53.828445034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:14:53.828947 containerd[1441]: time="2025-01-30T13:14:53.828910838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:14:53.828993 containerd[1441]: time="2025-01-30T13:14:53.828959474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:14:53.829095 containerd[1441]: time="2025-01-30T13:14:53.829069626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:14:53.848565 systemd[1]: Started cri-containerd-4ef0e0a4146609691b43570abdb591146457c76dab626dad5feee5f1fd6bbe2b.scope - libcontainer container 4ef0e0a4146609691b43570abdb591146457c76dab626dad5feee5f1fd6bbe2b.
Jan 30 13:14:53.875039 containerd[1441]: time="2025-01-30T13:14:53.874997289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-flw5p,Uid:6830b4ac-f1d6-4167-b811-071751e407a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ef0e0a4146609691b43570abdb591146457c76dab626dad5feee5f1fd6bbe2b\""
Jan 30 13:14:53.875899 kubelet[2532]: E0130 13:14:53.875878 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:14:53.879387 containerd[1441]: time="2025-01-30T13:14:53.879292082Z" level=info msg="CreateContainer within sandbox \"4ef0e0a4146609691b43570abdb591146457c76dab626dad5feee5f1fd6bbe2b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 13:14:53.897931 containerd[1441]: time="2025-01-30T13:14:53.897874427Z" level=info msg="CreateContainer within sandbox \"4ef0e0a4146609691b43570abdb591146457c76dab626dad5feee5f1fd6bbe2b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f7ca1a0f33f6e81177477043dda609ab642c028a6434f0f002a57a62ef630184\""
Jan 30 13:14:53.899886 containerd[1441]: time="2025-01-30T13:14:53.898559014Z" level=info msg="StartContainer for \"f7ca1a0f33f6e81177477043dda609ab642c028a6434f0f002a57a62ef630184\""
Jan 30 13:14:53.913005 kubelet[2532]: E0130 13:14:53.912883 2532 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 13:14:53.929569 systemd[1]: Started cri-containerd-f7ca1a0f33f6e81177477043dda609ab642c028a6434f0f002a57a62ef630184.scope - libcontainer container f7ca1a0f33f6e81177477043dda609ab642c028a6434f0f002a57a62ef630184.
Jan 30 13:14:53.964958 systemd[1]: cri-containerd-f7ca1a0f33f6e81177477043dda609ab642c028a6434f0f002a57a62ef630184.scope: Deactivated successfully.
Jan 30 13:14:53.974549 containerd[1441]: time="2025-01-30T13:14:53.974440756Z" level=info msg="StartContainer for \"f7ca1a0f33f6e81177477043dda609ab642c028a6434f0f002a57a62ef630184\" returns successfully"
Jan 30 13:14:54.066136 kubelet[2532]: E0130 13:14:54.065144 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:14:54.127134 containerd[1441]: time="2025-01-30T13:14:54.127074430Z" level=info msg="shim disconnected" id=f7ca1a0f33f6e81177477043dda609ab642c028a6434f0f002a57a62ef630184 namespace=k8s.io
Jan 30 13:14:54.127134 containerd[1441]: time="2025-01-30T13:14:54.127128666Z" level=warning msg="cleaning up after shim disconnected" id=f7ca1a0f33f6e81177477043dda609ab642c028a6434f0f002a57a62ef630184 namespace=k8s.io
Jan 30 13:14:54.127134 containerd[1441]: time="2025-01-30T13:14:54.127137346Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:14:55.069429 kubelet[2532]: E0130 13:14:55.069378 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:14:55.072019 containerd[1441]: time="2025-01-30T13:14:55.071973604Z" level=info msg="CreateContainer within sandbox \"4ef0e0a4146609691b43570abdb591146457c76dab626dad5feee5f1fd6bbe2b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 13:14:55.094626 containerd[1441]: time="2025-01-30T13:14:55.094567847Z" level=info msg="CreateContainer within sandbox \"4ef0e0a4146609691b43570abdb591146457c76dab626dad5feee5f1fd6bbe2b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dca05d06355900e7d2bac2c50f094e5201e56e6e871923ba5122447c72bba6df\""
Jan 30 13:14:55.096379 containerd[1441]: time="2025-01-30T13:14:55.096336768Z" level=info msg="StartContainer for \"dca05d06355900e7d2bac2c50f094e5201e56e6e871923ba5122447c72bba6df\""
Jan 30 13:14:55.125561 systemd[1]: Started cri-containerd-dca05d06355900e7d2bac2c50f094e5201e56e6e871923ba5122447c72bba6df.scope - libcontainer container dca05d06355900e7d2bac2c50f094e5201e56e6e871923ba5122447c72bba6df.
Jan 30 13:14:55.151204 containerd[1441]: time="2025-01-30T13:14:55.151147050Z" level=info msg="StartContainer for \"dca05d06355900e7d2bac2c50f094e5201e56e6e871923ba5122447c72bba6df\" returns successfully"
Jan 30 13:14:55.156293 systemd[1]: cri-containerd-dca05d06355900e7d2bac2c50f094e5201e56e6e871923ba5122447c72bba6df.scope: Deactivated successfully.
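The recurring dns.go:153 warnings mean the node's resolv.conf lists more nameservers than can be passed through: classic resolv.conf consumers use at most three nameserver lines, so the kubelet truncates the list to the three shown (1.1.1.1, 1.0.0.1, 8.8.8.8) and logs the rest as omitted. A sketch reproducing that check; the three-server cap mirrors the glibc resolver convention, not the kubelet's exact code:

```python
#!/usr/bin/env python3
"""Sketch: reproduce the kubelet's dns.go:153 nameserver-limit check."""
MAX_NAMESERVERS = 3  # glibc's resolver honours only the first three

def effective_nameservers(resolv_conf: str):
    servers = [
        parts[1]
        for line in resolv_conf.splitlines()
        if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
    ]
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

if __name__ == "__main__":
    with open("/etc/resolv.conf") as fh:
        applied, omitted = effective_nameservers(fh.read())
    if omitted:
        # Same situation the kubelet reports in the journal above.
        print(f"Nameserver limits exceeded; applied line: {' '.join(applied)}")
    else:
        print(f"nameservers within limits: {' '.join(applied)}")
```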
Jan 30 13:14:55.200230 containerd[1441]: time="2025-01-30T13:14:55.200154441Z" level=info msg="shim disconnected" id=dca05d06355900e7d2bac2c50f094e5201e56e6e871923ba5122447c72bba6df namespace=k8s.io
Jan 30 13:14:55.200230 containerd[1441]: time="2025-01-30T13:14:55.200210837Z" level=warning msg="cleaning up after shim disconnected" id=dca05d06355900e7d2bac2c50f094e5201e56e6e871923ba5122447c72bba6df namespace=k8s.io
Jan 30 13:14:55.200230 containerd[1441]: time="2025-01-30T13:14:55.200218877Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:14:56.073596 kubelet[2532]: E0130 13:14:56.073566 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:14:56.075991 containerd[1441]: time="2025-01-30T13:14:56.075954465Z" level=info msg="CreateContainer within sandbox \"4ef0e0a4146609691b43570abdb591146457c76dab626dad5feee5f1fd6bbe2b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 13:14:56.094232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3690273375.mount: Deactivated successfully.
Jan 30 13:14:56.099246 containerd[1441]: time="2025-01-30T13:14:56.099201765Z" level=info msg="CreateContainer within sandbox \"4ef0e0a4146609691b43570abdb591146457c76dab626dad5feee5f1fd6bbe2b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"447c8e1c7191ec254f435fb93b234580f0a96a072d1b9676db3fef17b5b4da32\""
Jan 30 13:14:56.099817 containerd[1441]: time="2025-01-30T13:14:56.099716093Z" level=info msg="StartContainer for \"447c8e1c7191ec254f435fb93b234580f0a96a072d1b9676db3fef17b5b4da32\""
Jan 30 13:14:56.131543 systemd[1]: Started cri-containerd-447c8e1c7191ec254f435fb93b234580f0a96a072d1b9676db3fef17b5b4da32.scope - libcontainer container 447c8e1c7191ec254f435fb93b234580f0a96a072d1b9676db3fef17b5b4da32.
Jan 30 13:14:56.160674 containerd[1441]: time="2025-01-30T13:14:56.160585350Z" level=info msg="StartContainer for \"447c8e1c7191ec254f435fb93b234580f0a96a072d1b9676db3fef17b5b4da32\" returns successfully"
Jan 30 13:14:56.161088 systemd[1]: cri-containerd-447c8e1c7191ec254f435fb93b234580f0a96a072d1b9676db3fef17b5b4da32.scope: Deactivated successfully.
Jan 30 13:14:56.184899 containerd[1441]: time="2025-01-30T13:14:56.184683396Z" level=info msg="shim disconnected" id=447c8e1c7191ec254f435fb93b234580f0a96a072d1b9676db3fef17b5b4da32 namespace=k8s.io
Jan 30 13:14:56.184899 containerd[1441]: time="2025-01-30T13:14:56.184751272Z" level=warning msg="cleaning up after shim disconnected" id=447c8e1c7191ec254f435fb93b234580f0a96a072d1b9676db3fef17b5b4da32 namespace=k8s.io
Jan 30 13:14:56.184899 containerd[1441]: time="2025-01-30T13:14:56.184759711Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:14:56.780039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-447c8e1c7191ec254f435fb93b234580f0a96a072d1b9676db3fef17b5b4da32-rootfs.mount: Deactivated successfully.
Jan 30 13:14:57.077241 kubelet[2532]: E0130 13:14:57.076742 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:14:57.079890 containerd[1441]: time="2025-01-30T13:14:57.079832101Z" level=info msg="CreateContainer within sandbox \"4ef0e0a4146609691b43570abdb591146457c76dab626dad5feee5f1fd6bbe2b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 13:14:57.112309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2138795074.mount: Deactivated successfully.
Jan 30 13:14:57.124434 containerd[1441]: time="2025-01-30T13:14:57.124215059Z" level=info msg="CreateContainer within sandbox \"4ef0e0a4146609691b43570abdb591146457c76dab626dad5feee5f1fd6bbe2b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ce23eb6d87687e6db806503f84741e841a08b407ac43db124727c41d0303575c\""
Jan 30 13:14:57.126473 containerd[1441]: time="2025-01-30T13:14:57.125060970Z" level=info msg="StartContainer for \"ce23eb6d87687e6db806503f84741e841a08b407ac43db124727c41d0303575c\""
Jan 30 13:14:57.160558 systemd[1]: Started cri-containerd-ce23eb6d87687e6db806503f84741e841a08b407ac43db124727c41d0303575c.scope - libcontainer container ce23eb6d87687e6db806503f84741e841a08b407ac43db124727c41d0303575c.
Jan 30 13:14:57.183851 systemd[1]: cri-containerd-ce23eb6d87687e6db806503f84741e841a08b407ac43db124727c41d0303575c.scope: Deactivated successfully.
Jan 30 13:14:57.188459 containerd[1441]: time="2025-01-30T13:14:57.188327100Z" level=info msg="StartContainer for \"ce23eb6d87687e6db806503f84741e841a08b407ac43db124727c41d0303575c\" returns successfully"
Jan 30 13:14:57.205771 containerd[1441]: time="2025-01-30T13:14:57.188572406Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6830b4ac_f1d6_4167_b811_071751e407a8.slice/cri-containerd-ce23eb6d87687e6db806503f84741e841a08b407ac43db124727c41d0303575c.scope/memory.events\": no such file or directory"
Jan 30 13:14:57.226561 containerd[1441]: time="2025-01-30T13:14:57.226485943Z" level=info msg="shim disconnected" id=ce23eb6d87687e6db806503f84741e841a08b407ac43db124727c41d0303575c namespace=k8s.io
Jan 30 13:14:57.226561 containerd[1441]: time="2025-01-30T13:14:57.226547019Z" level=warning msg="cleaning up after shim disconnected" id=ce23eb6d87687e6db806503f84741e841a08b407ac43db124727c41d0303575c namespace=k8s.io
Jan 30 13:14:57.226561 containerd[1441]: time="2025-01-30T13:14:57.226554899Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:14:57.779064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce23eb6d87687e6db806503f84741e841a08b407ac43db124727c41d0303575c-rootfs.mount: Deactivated successfully.
Jan 30 13:14:58.082936 kubelet[2532]: E0130 13:14:58.082011 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:14:58.085003 containerd[1441]: time="2025-01-30T13:14:58.084955591Z" level=info msg="CreateContainer within sandbox \"4ef0e0a4146609691b43570abdb591146457c76dab626dad5feee5f1fd6bbe2b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 13:14:58.102073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3189513747.mount: Deactivated successfully.
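Across the entries above, the new pod's init containers run strictly in sequence: mount-cgroup, then apply-sysctl-overwrites, then mount-bpf-fs, then clean-cilium-state each get a CreateContainer/StartContainer pair, exit, and have their scope deactivated before the long-running cilium-agent container is created. The same progression can be read back from the API; a sketch using the real kubectl CLI, with the pod name taken from this log:

```python
#!/usr/bin/env python3
"""Sketch: read the init-container progression seen in the journal."""
import json
import subprocess

out = subprocess.run(
    ["kubectl", "-n", "kube-system", "get", "pod", "cilium-flw5p",
     "-o", "json"],
    check=True, capture_output=True, text=True,
).stdout
pod = json.loads(out)

# initContainerStatuses is reported in execution order; each container
# must terminate with exit code 0 before the next one is created.
for status in pod["status"].get("initContainerStatuses", []):
    state = next(iter(status["state"]))  # waiting | running | terminated
    print(f"{status['name']}: {state}")
```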
Jan 30 13:14:58.112032 containerd[1441]: time="2025-01-30T13:14:58.111975436Z" level=info msg="CreateContainer within sandbox \"4ef0e0a4146609691b43570abdb591146457c76dab626dad5feee5f1fd6bbe2b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7ed9523dfefe8cafa747cd77767bbf0e397833632ce6e3cdc0573adb2fad7f1b\""
Jan 30 13:14:58.114795 containerd[1441]: time="2025-01-30T13:14:58.114463660Z" level=info msg="StartContainer for \"7ed9523dfefe8cafa747cd77767bbf0e397833632ce6e3cdc0573adb2fad7f1b\""
Jan 30 13:14:58.144569 systemd[1]: Started cri-containerd-7ed9523dfefe8cafa747cd77767bbf0e397833632ce6e3cdc0573adb2fad7f1b.scope - libcontainer container 7ed9523dfefe8cafa747cd77767bbf0e397833632ce6e3cdc0573adb2fad7f1b.
Jan 30 13:14:58.169134 containerd[1441]: time="2025-01-30T13:14:58.169085758Z" level=info msg="StartContainer for \"7ed9523dfefe8cafa747cd77767bbf0e397833632ce6e3cdc0573adb2fad7f1b\" returns successfully"
Jan 30 13:14:58.457440 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 30 13:14:59.087251 kubelet[2532]: E0130 13:14:59.087216 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:14:59.106801 kubelet[2532]: I0130 13:14:59.106288 2532 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-flw5p" podStartSLOduration=6.106270384 podStartE2EDuration="6.106270384s" podCreationTimestamp="2025-01-30 13:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:14:59.105655855 +0000 UTC m=+80.393228495" watchObservedRunningTime="2025-01-30 13:14:59.106270384 +0000 UTC m=+80.393843024"
Jan 30 13:15:00.053007 systemd[1]: run-containerd-runc-k8s.io-7ed9523dfefe8cafa747cd77767bbf0e397833632ce6e3cdc0573adb2fad7f1b-runc.zyh3gj.mount: Deactivated successfully.
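The kernel's "alg: No test for seqiv(rfc4106(gcm(aes)))" notice ties back to the cilium-ipsec-secrets volume attached earlier: rfc4106 AES-GCM is the cipher Cilium's transparent encryption installs into the kernel's XFRM layer, and the message is an informational self-test note printed the first time the algorithm is used. A sketch of creating that key material, following the format in Cilium's IPsec documentation ("<seq> rfc4106(gcm(aes)) <hex key> 128"); the secret and key names are the documented defaults, so verify them against the Cilium version actually deployed:

```python
#!/usr/bin/env python3
"""Sketch: create the key material behind the cilium-ipsec-secrets volume."""
import secrets
import subprocess

key_hex = secrets.token_hex(20)  # 20 random bytes, hex-encoded
keys_value = f"3 rfc4106(gcm(aes)) {key_hex} 128"

# Uses the real kubectl CLI; names follow Cilium's IPsec docs.
subprocess.run(
    ["kubectl", "-n", "kube-system", "create", "secret", "generic",
     "cilium-ipsec-keys", f"--from-literal=keys={keys_value}"],
    check=True,
)
```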
Jan 30 13:15:00.091422 kubelet[2532]: E0130 13:15:00.089087 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:00.812880 kubelet[2532]: E0130 13:15:00.812836 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:01.353034 systemd-networkd[1374]: lxc_health: Link UP
Jan 30 13:15:01.363636 systemd-networkd[1374]: lxc_health: Gained carrier
Jan 30 13:15:01.805718 kubelet[2532]: E0130 13:15:01.805668 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:02.091697 kubelet[2532]: E0130 13:15:02.091583 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:02.795628 systemd-networkd[1374]: lxc_health: Gained IPv6LL
Jan 30 13:15:03.093046 kubelet[2532]: E0130 13:15:03.092936 2532 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:06.457291 sshd[4345]: Connection closed by 10.0.0.1 port 36594
Jan 30 13:15:06.457819 sshd-session[4343]: pam_unix(sshd:session): session closed for user core
Jan 30 13:15:06.460497 systemd[1]: sshd@25-10.0.0.147:22-10.0.0.1:36594.service: Deactivated successfully.
Jan 30 13:15:06.462231 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 13:15:06.463542 systemd-logind[1423]: Session 26 logged out. Waiting for processes to exit.
Jan 30 13:15:06.464700 systemd-logind[1423]: Removed session 26.