Jan 17 12:02:17.910975 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 17 12:02:17.910996 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 17 10:42:25 -00 2025
Jan 17 12:02:17.911005 kernel: KASLR enabled
Jan 17 12:02:17.911011 kernel: efi: EFI v2.7 by EDK II
Jan 17 12:02:17.911017 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jan 17 12:02:17.911022 kernel: random: crng init done
Jan 17 12:02:17.911029 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:02:17.911035 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 17 12:02:17.911041 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 17 12:02:17.911049 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:02:17.911055 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:02:17.911061 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:02:17.911066 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:02:17.911072 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:02:17.911080 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:02:17.911087 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:02:17.911094 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:02:17.911100 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:02:17.911106 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 17 12:02:17.911112 kernel: NUMA: Failed to initialise from firmware
Jan 17 12:02:17.911119 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 17 12:02:17.911125 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 17 12:02:17.911131 kernel: Zone ranges:
Jan 17 12:02:17.911137 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 17 12:02:17.911144 kernel: DMA32 empty
Jan 17 12:02:17.911151 kernel: Normal empty
Jan 17 12:02:17.911157 kernel: Movable zone start for each node
Jan 17 12:02:17.911164 kernel: Early memory node ranges
Jan 17 12:02:17.911170 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 17 12:02:17.911176 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 17 12:02:17.911182 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 17 12:02:17.911189 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 17 12:02:17.911195 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 17 12:02:17.911201 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 17 12:02:17.911208 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 17 12:02:17.911214 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 17 12:02:17.911220 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 17 12:02:17.911228 kernel: psci: probing for conduit method from ACPI.
Jan 17 12:02:17.911234 kernel: psci: PSCIv1.1 detected in firmware.
Jan 17 12:02:17.911240 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 17 12:02:17.911249 kernel: psci: Trusted OS migration not required
Jan 17 12:02:17.911256 kernel: psci: SMC Calling Convention v1.1
Jan 17 12:02:17.911263 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 17 12:02:17.911271 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 17 12:02:17.911277 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 17 12:02:17.911284 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 17 12:02:17.911291 kernel: Detected PIPT I-cache on CPU0
Jan 17 12:02:17.911298 kernel: CPU features: detected: GIC system register CPU interface
Jan 17 12:02:17.911304 kernel: CPU features: detected: Hardware dirty bit management
Jan 17 12:02:17.911311 kernel: CPU features: detected: Spectre-v4
Jan 17 12:02:17.911318 kernel: CPU features: detected: Spectre-BHB
Jan 17 12:02:17.911324 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 17 12:02:17.911332 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 17 12:02:17.911340 kernel: CPU features: detected: ARM erratum 1418040
Jan 17 12:02:17.911346 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 17 12:02:17.911357 kernel: alternatives: applying boot alternatives
Jan 17 12:02:17.911365 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3
Jan 17 12:02:17.911372 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:02:17.911379 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 12:02:17.911386 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:02:17.911393 kernel: Fallback order for Node 0: 0
Jan 17 12:02:17.911400 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 17 12:02:17.911406 kernel: Policy zone: DMA
Jan 17 12:02:17.911413 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:02:17.911421 kernel: software IO TLB: area num 4.
Jan 17 12:02:17.911428 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 17 12:02:17.911435 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Jan 17 12:02:17.911442 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 17 12:02:17.911449 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:02:17.911456 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:02:17.911463 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 17 12:02:17.911470 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:02:17.911476 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:02:17.911483 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:02:17.911490 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 17 12:02:17.911497 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 17 12:02:17.911517 kernel: GICv3: 256 SPIs implemented
Jan 17 12:02:17.911524 kernel: GICv3: 0 Extended SPIs implemented
Jan 17 12:02:17.911531 kernel: Root IRQ handler: gic_handle_irq
Jan 17 12:02:17.911537 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 17 12:02:17.911544 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 17 12:02:17.911551 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 17 12:02:17.911557 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 17 12:02:17.911564 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 17 12:02:17.911571 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 17 12:02:17.911578 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 17 12:02:17.911585 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:02:17.911593 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 12:02:17.911600 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 17 12:02:17.911607 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 17 12:02:17.911614 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 17 12:02:17.911621 kernel: arm-pv: using stolen time PV
Jan 17 12:02:17.911628 kernel: Console: colour dummy device 80x25
Jan 17 12:02:17.911634 kernel: ACPI: Core revision 20230628
Jan 17 12:02:17.911641 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 17 12:02:17.911648 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:02:17.911655 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:02:17.911663 kernel: landlock: Up and running.
Jan 17 12:02:17.911670 kernel: SELinux: Initializing.
Jan 17 12:02:17.911677 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:02:17.911684 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:02:17.911691 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 12:02:17.911698 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 12:02:17.911705 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:02:17.911712 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:02:17.911719 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 17 12:02:17.911727 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 17 12:02:17.911733 kernel: Remapping and enabling EFI services.
Jan 17 12:02:17.911740 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:02:17.911747 kernel: Detected PIPT I-cache on CPU1
Jan 17 12:02:17.911754 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 17 12:02:17.911761 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 17 12:02:17.911768 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 12:02:17.911775 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 17 12:02:17.911781 kernel: Detected PIPT I-cache on CPU2
Jan 17 12:02:17.911788 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 17 12:02:17.911796 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 17 12:02:17.911810 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 12:02:17.911821 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 17 12:02:17.911830 kernel: Detected PIPT I-cache on CPU3
Jan 17 12:02:17.911837 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 17 12:02:17.911845 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 17 12:02:17.911852 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 12:02:17.911859 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 17 12:02:17.911866 kernel: smp: Brought up 1 node, 4 CPUs
Jan 17 12:02:17.911874 kernel: SMP: Total of 4 processors activated.
Jan 17 12:02:17.911881 kernel: CPU features: detected: 32-bit EL0 Support
Jan 17 12:02:17.911889 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 17 12:02:17.911896 kernel: CPU features: detected: Common not Private translations
Jan 17 12:02:17.911903 kernel: CPU features: detected: CRC32 instructions
Jan 17 12:02:17.911911 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 17 12:02:17.911918 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 17 12:02:17.911925 kernel: CPU features: detected: LSE atomic instructions
Jan 17 12:02:17.911933 kernel: CPU features: detected: Privileged Access Never
Jan 17 12:02:17.911940 kernel: CPU features: detected: RAS Extension Support
Jan 17 12:02:17.911948 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 17 12:02:17.911955 kernel: CPU: All CPU(s) started at EL1
Jan 17 12:02:17.911962 kernel: alternatives: applying system-wide alternatives
Jan 17 12:02:17.911969 kernel: devtmpfs: initialized
Jan 17 12:02:17.911976 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:02:17.911984 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 17 12:02:17.911991 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:02:17.911999 kernel: SMBIOS 3.0.0 present.
Jan 17 12:02:17.912007 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 17 12:02:17.912014 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:02:17.912021 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 17 12:02:17.912028 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 17 12:02:17.912036 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 17 12:02:17.912043 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:02:17.912050 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jan 17 12:02:17.912057 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:02:17.912065 kernel: cpuidle: using governor menu
Jan 17 12:02:17.912073 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 17 12:02:17.912080 kernel: ASID allocator initialised with 32768 entries
Jan 17 12:02:17.912087 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:02:17.912094 kernel: Serial: AMBA PL011 UART driver
Jan 17 12:02:17.912101 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 17 12:02:17.912108 kernel: Modules: 0 pages in range for non-PLT usage
Jan 17 12:02:17.912115 kernel: Modules: 509040 pages in range for PLT usage
Jan 17 12:02:17.912123 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:02:17.912131 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:02:17.912138 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 17 12:02:17.912146 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 17 12:02:17.912153 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:02:17.912160 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:02:17.912167 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 17 12:02:17.912174 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 17 12:02:17.912181 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:02:17.912188 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:02:17.912197 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:02:17.912204 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:02:17.912211 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:02:17.912218 kernel: ACPI: Interpreter enabled
Jan 17 12:02:17.912225 kernel: ACPI: Using GIC for interrupt routing
Jan 17 12:02:17.912233 kernel: ACPI: MCFG table detected, 1 entries
Jan 17 12:02:17.912240 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 17 12:02:17.912247 kernel: printk: console [ttyAMA0] enabled
Jan 17 12:02:17.912255 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 12:02:17.912382 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:02:17.912454 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 17 12:02:17.912535 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 17 12:02:17.912602 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 17 12:02:17.912665 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 17 12:02:17.912675 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 17 12:02:17.912682 kernel: PCI host bridge to bus 0000:00
Jan 17 12:02:17.912753 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 17 12:02:17.912822 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 17 12:02:17.912882 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 17 12:02:17.912940 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 12:02:17.913019 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 17 12:02:17.913094 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 12:02:17.913163 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 17 12:02:17.913227 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 17 12:02:17.913294 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 17 12:02:17.913359 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 17 12:02:17.913423 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 17 12:02:17.913487 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 17 12:02:17.913576 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 17 12:02:17.913638 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 17 12:02:17.913697 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 17 12:02:17.913706 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 17 12:02:17.913714 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 17 12:02:17.913722 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 17 12:02:17.913729 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 17 12:02:17.913736 kernel: iommu: Default domain type: Translated
Jan 17 12:02:17.913743 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 17 12:02:17.913752 kernel: efivars: Registered efivars operations
Jan 17 12:02:17.913759 kernel: vgaarb: loaded
Jan 17 12:02:17.913767 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 17 12:02:17.913774 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:02:17.913781 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:02:17.913789 kernel: pnp: PnP ACPI init
Jan 17 12:02:17.913872 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 17 12:02:17.913884 kernel: pnp: PnP ACPI: found 1 devices
Jan 17 12:02:17.913891 kernel: NET: Registered PF_INET protocol family
Jan 17 12:02:17.913901 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:02:17.913908 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 12:02:17.913916 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:02:17.913923 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:02:17.913931 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 12:02:17.913938 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 12:02:17.913946 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:02:17.913953 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:02:17.913962 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:02:17.913969 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:02:17.913976 kernel: kvm [1]: HYP mode not available
Jan 17 12:02:17.913983 kernel: Initialise system trusted keyrings
Jan 17 12:02:17.913991 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 12:02:17.913998 kernel: Key type asymmetric registered
Jan 17 12:02:17.914005 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:02:17.914012 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 17 12:02:17.914020 kernel: io scheduler mq-deadline registered
Jan 17 12:02:17.914027 kernel: io scheduler kyber registered
Jan 17 12:02:17.914036 kernel: io scheduler bfq registered
Jan 17 12:02:17.914043 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 17 12:02:17.914050 kernel: ACPI: button: Power Button [PWRB]
Jan 17 12:02:17.914058 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 17 12:02:17.914126 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 17 12:02:17.914136 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:02:17.914144 kernel: thunder_xcv, ver 1.0
Jan 17 12:02:17.914151 kernel: thunder_bgx, ver 1.0
Jan 17 12:02:17.914159 kernel: nicpf, ver 1.0
Jan 17 12:02:17.914168 kernel: nicvf, ver 1.0
Jan 17 12:02:17.914246 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 17 12:02:17.914310 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-17T12:02:17 UTC (1737115337)
Jan 17 12:02:17.914320 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 12:02:17.914327 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 17 12:02:17.914335 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 17 12:02:17.914342 kernel: watchdog: Hard watchdog permanently disabled
Jan 17 12:02:17.914349 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:02:17.914359 kernel: Segment Routing with IPv6
Jan 17 12:02:17.914366 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:02:17.914373 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:02:17.914380 kernel: Key type dns_resolver registered
Jan 17 12:02:17.914387 kernel: registered taskstats version 1
Jan 17 12:02:17.914394 kernel: Loading compiled-in X.509 certificates
Jan 17 12:02:17.914401 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e5b890cba32c3e1c766d9a9b821ee4d2154ffee7'
Jan 17 12:02:17.914409 kernel: Key type .fscrypt registered
Jan 17 12:02:17.914416 kernel: Key type fscrypt-provisioning registered
Jan 17 12:02:17.914424 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:02:17.914432 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:02:17.914439 kernel: ima: No architecture policies found
Jan 17 12:02:17.914447 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 17 12:02:17.914454 kernel: clk: Disabling unused clocks
Jan 17 12:02:17.914461 kernel: Freeing unused kernel memory: 39360K
Jan 17 12:02:17.914479 kernel: Run /init as init process
Jan 17 12:02:17.914487 kernel: with arguments:
Jan 17 12:02:17.914495 kernel: /init
Jan 17 12:02:17.914514 kernel: with environment:
Jan 17 12:02:17.914521 kernel: HOME=/
Jan 17 12:02:17.914528 kernel: TERM=linux
Jan 17 12:02:17.914536 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:02:17.914545 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:02:17.914554 systemd[1]: Detected virtualization kvm.
Jan 17 12:02:17.914562 systemd[1]: Detected architecture arm64.
Jan 17 12:02:17.914572 systemd[1]: Running in initrd.
Jan 17 12:02:17.914580 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:02:17.914588 systemd[1]: Hostname set to .
Jan 17 12:02:17.914597 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:02:17.914605 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:02:17.914613 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:02:17.914621 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:02:17.914629 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:02:17.914638 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:02:17.914646 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:02:17.914654 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:02:17.914664 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:02:17.914672 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:02:17.914680 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:02:17.914688 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:02:17.914698 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:02:17.914706 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:02:17.914714 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:02:17.914725 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:02:17.914733 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:02:17.914742 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:02:17.914749 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:02:17.914767 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:02:17.914780 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:02:17.914791 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:02:17.914803 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:02:17.914812 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:02:17.914820 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:02:17.914828 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:02:17.914836 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:02:17.914844 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:02:17.914852 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:02:17.914862 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:02:17.914870 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:02:17.914879 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:02:17.914887 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:02:17.914895 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:02:17.914903 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:02:17.914913 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:02:17.914938 systemd-journald[236]: Collecting audit messages is disabled.
Jan 17 12:02:17.914957 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:02:17.914967 systemd-journald[236]: Journal started
Jan 17 12:02:17.914985 systemd-journald[236]: Runtime Journal (/run/log/journal/51b44ec5e8244f5f9ef960d12670d6d5) is 5.9M, max 47.3M, 41.4M free.
Jan 17 12:02:17.901132 systemd-modules-load[238]: Inserted module 'overlay'
Jan 17 12:02:17.916702 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:02:17.917586 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:02:17.917749 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:02:17.920899 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jan 17 12:02:17.921894 kernel: Bridge firewalling registered
Jan 17 12:02:17.922525 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:02:17.924633 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:02:17.925893 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:02:17.930511 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:02:17.934582 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:02:17.937717 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:02:17.938636 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:02:17.940384 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:02:17.949639 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:02:17.951466 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:02:17.964834 dracut-cmdline[274]: dracut-dracut-053
Jan 17 12:02:17.967355 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3
Jan 17 12:02:17.982319 systemd-resolved[277]: Positive Trust Anchors:
Jan 17 12:02:17.982335 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:02:17.982365 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:02:17.986943 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jan 17 12:02:17.987876 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:02:17.989364 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:02:18.038532 kernel: SCSI subsystem initialized
Jan 17 12:02:18.043525 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:02:18.050526 kernel: iscsi: registered transport (tcp)
Jan 17 12:02:18.063530 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:02:18.063545 kernel: QLogic iSCSI HBA Driver
Jan 17 12:02:18.106560 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:02:18.118675 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:02:18.137193 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:02:18.137247 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:02:18.137259 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:02:18.182534 kernel: raid6: neonx8 gen() 15723 MB/s
Jan 17 12:02:18.199514 kernel: raid6: neonx4 gen() 15609 MB/s
Jan 17 12:02:18.216520 kernel: raid6: neonx2 gen() 13261 MB/s
Jan 17 12:02:18.233513 kernel: raid6: neonx1 gen() 10445 MB/s
Jan 17 12:02:18.250524 kernel: raid6: int64x8 gen() 6938 MB/s
Jan 17 12:02:18.267513 kernel: raid6: int64x4 gen() 7331 MB/s
Jan 17 12:02:18.284524 kernel: raid6: int64x2 gen() 6109 MB/s
Jan 17 12:02:18.301521 kernel: raid6: int64x1 gen() 5037 MB/s
Jan 17 12:02:18.301535 kernel: raid6: using algorithm neonx8 gen() 15723 MB/s
Jan 17 12:02:18.318518 kernel: raid6: .... xor() 11882 MB/s, rmw enabled
Jan 17 12:02:18.318532 kernel: raid6: using neon recovery algorithm
Jan 17 12:02:18.323515 kernel: xor: measuring software checksum speed
Jan 17 12:02:18.323534 kernel: 8regs : 19807 MB/sec
Jan 17 12:02:18.324898 kernel: 32regs : 18557 MB/sec
Jan 17 12:02:18.324911 kernel: arm64_neon : 26927 MB/sec
Jan 17 12:02:18.324921 kernel: xor: using function: arm64_neon (26927 MB/sec)
Jan 17 12:02:18.374527 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:02:18.385123 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:02:18.397736 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:02:18.408266 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Jan 17 12:02:18.411366 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:02:18.414732 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:02:18.428213 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Jan 17 12:02:18.451965 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:02:18.459703 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:02:18.497552 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:02:18.508623 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:02:18.518376 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:02:18.519808 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:02:18.521603 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:02:18.523393 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:02:18.531735 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:02:18.540522 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 17 12:02:18.551606 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 17 12:02:18.551698 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:02:18.551710 kernel: GPT:9289727 != 19775487
Jan 17 12:02:18.551719 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:02:18.551728 kernel: GPT:9289727 != 19775487
Jan 17 12:02:18.551737 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:02:18.551746 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:02:18.541760 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:02:18.551557 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:02:18.551668 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:02:18.553840 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:02:18.554633 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:02:18.554764 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:02:18.556659 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:02:18.564735 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:02:18.572086 kernel: BTRFS: device fsid 8c8354db-e4b6-4022-87e4-d06cc74d2d9f devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (519)
Jan 17 12:02:18.572125 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (514)
Jan 17 12:02:18.573712 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 17 12:02:18.578889 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:02:18.586896 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 17 12:02:18.594426 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 12:02:18.598226 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 17 12:02:18.599108 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 17 12:02:18.614635 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:02:18.616538 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:02:18.620331 disk-uuid[548]: Primary Header is updated.
Jan 17 12:02:18.620331 disk-uuid[548]: Secondary Entries is updated.
Jan 17 12:02:18.620331 disk-uuid[548]: Secondary Header is updated.
Jan 17 12:02:18.623519 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:02:18.642806 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:02:19.637529 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 17 12:02:19.637964 disk-uuid[549]: The operation has completed successfully.
Jan 17 12:02:19.660844 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:02:19.660938 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:02:19.681662 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:02:19.684583 sh[572]: Success
Jan 17 12:02:19.697527 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 17 12:02:19.727567 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:02:19.741846 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:02:19.743925 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:02:19.754954 kernel: BTRFS info (device dm-0): first mount of filesystem 8c8354db-e4b6-4022-87e4-d06cc74d2d9f
Jan 17 12:02:19.754992 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:02:19.755003 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:02:19.756751 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:02:19.756765 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:02:19.760996 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:02:19.761866 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:02:19.762631 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:02:19.764882 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:02:19.775711 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:02:19.775758 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:02:19.775776 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:02:19.777524 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:02:19.784833 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:02:19.786116 kernel: BTRFS info (device vda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:02:19.791273 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:02:19.799661 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:02:19.856804 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:02:19.863680 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:02:19.890737 systemd-networkd[761]: lo: Link UP
Jan 17 12:02:19.890749 systemd-networkd[761]: lo: Gained carrier
Jan 17 12:02:19.891430 systemd-networkd[761]: Enumeration completed
Jan 17 12:02:19.891939 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:02:19.891943 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:02:19.892667 systemd-networkd[761]: eth0: Link UP
Jan 17 12:02:19.892670 systemd-networkd[761]: eth0: Gained carrier
Jan 17 12:02:19.892676 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:02:19.894206 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:02:19.895137 systemd[1]: Reached target network.target - Network.
Jan 17 12:02:19.904814 ignition[669]: Ignition 2.19.0
Jan 17 12:02:19.904824 ignition[669]: Stage: fetch-offline
Jan 17 12:02:19.904861 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:02:19.904869 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 12:02:19.905021 ignition[669]: parsed url from cmdline: ""
Jan 17 12:02:19.905024 ignition[669]: no config URL provided
Jan 17 12:02:19.905029 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:02:19.905036 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:02:19.905060 ignition[669]: op(1): [started] loading QEMU firmware config module
Jan 17 12:02:19.905065 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 17 12:02:19.911180 ignition[669]: op(1): [finished] loading QEMU firmware config module
Jan 17 12:02:19.912557 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.47/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 17 12:02:19.949152 ignition[669]: parsing config with SHA512: 26cbcbf240f71f69f22899806c3df7df6f4e5aaaad73993b863f2e8b37d892837723a1cb0d9e56f338faf908ca0686314799f478c5ceea9262e297b799533a2f
Jan 17 12:02:19.953225 unknown[669]: fetched base config from "system"
Jan 17 12:02:19.953704 systemd-resolved[277]: Detected conflict on linux IN A 10.0.0.47
Jan 17 12:02:19.953712 systemd-resolved[277]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Jan 17 12:02:19.954523 ignition[669]: fetch-offline: fetch-offline passed
Jan 17 12:02:19.953786 unknown[669]: fetched user config from "qemu"
Jan 17 12:02:19.954593 ignition[669]: Ignition finished successfully
Jan 17 12:02:19.956128 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:02:19.957538 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 17 12:02:19.966655 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:02:19.977242 ignition[772]: Ignition 2.19.0
Jan 17 12:02:19.977252 ignition[772]: Stage: kargs
Jan 17 12:02:19.977406 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:02:19.977415 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 12:02:19.978321 ignition[772]: kargs: kargs passed
Jan 17 12:02:19.978362 ignition[772]: Ignition finished successfully
Jan 17 12:02:19.981140 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:02:19.994650 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:02:20.004044 ignition[780]: Ignition 2.19.0
Jan 17 12:02:20.004054 ignition[780]: Stage: disks
Jan 17 12:02:20.004209 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:02:20.004219 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 12:02:20.005125 ignition[780]: disks: disks passed
Jan 17 12:02:20.006579 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:02:20.005172 ignition[780]: Ignition finished successfully
Jan 17 12:02:20.007791 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:02:20.008951 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:02:20.011036 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:02:20.012336 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:02:20.013843 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:02:20.028653 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:02:20.037786 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 12:02:20.041300 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:02:20.043135 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:02:20.088520 kernel: EXT4-fs (vda9): mounted filesystem 5d516319-3144-49e6-9760-d0f29faba535 r/w with ordered data mode. Quota mode: none.
Jan 17 12:02:20.088907 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:02:20.089958 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:02:20.103599 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:02:20.105536 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:02:20.106367 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 12:02:20.106408 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:02:20.106431 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:02:20.112559 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799)
Jan 17 12:02:20.112584 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:02:20.114471 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:02:20.114513 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:02:20.114633 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:02:20.116561 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:02:20.121514 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:02:20.123336 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:02:20.160721 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:02:20.163813 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:02:20.167355 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:02:20.170873 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:02:20.237153 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:02:20.244646 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:02:20.250045 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:02:20.253518 kernel: BTRFS info (device vda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:02:20.266396 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 12:02:20.272443 ignition[913]: INFO : Ignition 2.19.0
Jan 17 12:02:20.272443 ignition[913]: INFO : Stage: mount
Jan 17 12:02:20.273709 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:02:20.273709 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 12:02:20.273709 ignition[913]: INFO : mount: mount passed
Jan 17 12:02:20.273709 ignition[913]: INFO : Ignition finished successfully
Jan 17 12:02:20.274941 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 12:02:20.285625 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 12:02:20.754128 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:02:20.762759 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:02:20.769069 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
Jan 17 12:02:20.769106 kernel: BTRFS info (device vda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:02:20.769118 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:02:20.770521 kernel: BTRFS info (device vda6): using free space tree
Jan 17 12:02:20.772516 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 12:02:20.773469 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:02:20.789508 ignition[944]: INFO : Ignition 2.19.0
Jan 17 12:02:20.789508 ignition[944]: INFO : Stage: files
Jan 17 12:02:20.790803 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:02:20.790803 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 17 12:02:20.790803 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 12:02:20.793622 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 12:02:20.793622 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 12:02:20.796616 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 12:02:20.797712 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 12:02:20.797712 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 12:02:20.797147 unknown[944]: wrote ssh authorized keys file for user: core
Jan 17 12:02:20.800754 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 12:02:20.800754 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 12:02:20.800754 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 17 12:02:20.800754 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 17 12:02:20.824118 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 12:02:20.905817 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 17 12:02:20.905817 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 12:02:20.908639 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 17 12:02:21.199941 systemd-networkd[761]: eth0: Gained IPv6LL
Jan 17 12:02:21.202035 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jan 17 12:02:21.276706 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 12:02:21.276706 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 12:02:21.279452 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 12:02:21.279452 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:02:21.279452 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:02:21.279452 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:02:21.279452 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:02:21.279452 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:02:21.279452 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:02:21.288755 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:02:21.288755 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:02:21.288755 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 17 12:02:21.288755 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 17 12:02:21.288755 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 17 12:02:21.288755 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 17 12:02:21.514363 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jan 17 12:02:21.675593 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 17 12:02:21.675593 ignition[944]: INFO : files: op(d): [started] processing unit "containerd.service"
Jan 17 12:02:21.678369 ignition[944]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 12:02:21.678369 ignition[944]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 12:02:21.678369 ignition[944]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jan 17 12:02:21.678369 ignition[944]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jan 17 12:02:21.678369 ignition[944]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:02:21.678369 ignition[944]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:02:21.678369 ignition[944]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jan 17 12:02:21.678369 ignition[944]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Jan 17 12:02:21.678369 ignition[944]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 17 12:02:21.678369 ignition[944]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 17 12:02:21.678369 ignition[944]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Jan 17 12:02:21.678369 ignition[944]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Jan 17 12:02:21.699528 ignition[944]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 17 12:02:21.704268 ignition[944]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 17 12:02:21.705453 ignition[944]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 17 12:02:21.705453 ignition[944]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 12:02:21.705453 ignition[944]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 12:02:21.705453 ignition[944]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:02:21.705453 ignition[944]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:02:21.705453 ignition[944]: INFO : files: files passed
Jan 17 12:02:21.705453 ignition[944]: INFO : Ignition finished successfully
Jan 17 12:02:21.710576 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 12:02:21.737674 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 12:02:21.739831 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 12:02:21.741416 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 12:02:21.741515 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 12:02:21.746892 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 17 12:02:21.750154 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:02:21.750154 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:02:21.752593 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:02:21.752231 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:02:21.753802 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 12:02:21.762645 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 12:02:21.780657 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 12:02:21.781463 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 12:02:21.783623 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 12:02:21.784660 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 12:02:21.785995 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 12:02:21.801704 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 12:02:21.812534 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:02:21.817661 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 12:02:21.825641 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:02:21.826564 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:02:21.828166 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:02:21.829463 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:02:21.829584 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:02:21.831543 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:02:21.833070 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:02:21.834343 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:02:21.835604 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:02:21.837130 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:02:21.838548 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:02:21.840051 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:02:21.841555 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:02:21.843209 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:02:21.844492 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:02:21.845654 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:02:21.845777 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:02:21.847582 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:02:21.849020 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:02:21.850438 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:02:21.853579 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:02:21.854562 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:02:21.854674 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:02:21.856782 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:02:21.856909 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:02:21.858314 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:02:21.859474 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:02:21.863565 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:02:21.865837 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:02:21.866726 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:02:21.867944 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:02:21.868084 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:02:21.869194 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:02:21.869320 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:02:21.870395 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:02:21.870567 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:02:21.872033 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:02:21.872179 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:02:21.889065 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 17 12:02:21.897107 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:02:21.898700 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:02:21.899043 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:02:21.900648 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:02:21.900748 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:02:21.906439 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:02:21.907861 ignition[998]: INFO : Ignition 2.19.0 Jan 17 12:02:21.907861 ignition[998]: INFO : Stage: umount Jan 17 12:02:21.911232 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:02:21.911232 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:02:21.911232 ignition[998]: INFO : umount: umount passed Jan 17 12:02:21.911232 ignition[998]: INFO : Ignition finished successfully Jan 17 12:02:21.913495 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:02:21.913638 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:02:21.916576 systemd[1]: Stopped target network.target - Network. Jan 17 12:02:21.917630 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:02:21.917695 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:02:21.919545 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:02:21.919599 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:02:21.920842 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:02:21.920882 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:02:21.922791 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:02:21.922843 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:02:21.924635 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:02:21.925846 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:02:21.927512 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:02:21.927607 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:02:21.928888 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:02:21.928960 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:02:21.931745 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:02:21.931810 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:02:21.933431 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:02:21.933567 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:02:21.934589 systemd-networkd[761]: eth0: DHCPv6 lease lost Jan 17 12:02:21.935881 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:02:21.936684 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:02:21.938443 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:02:21.938490 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:02:21.950645 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:02:21.951366 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 17 12:02:21.951416 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:02:21.953153 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:02:21.953195 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:02:21.954483 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:02:21.954565 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:02:21.956474 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:02:21.956533 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:02:21.958065 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:02:21.967526 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:02:21.967654 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:02:21.973173 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:02:21.973307 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:02:21.975709 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:02:21.975754 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:02:21.976608 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:02:21.976639 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:02:21.978059 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:02:21.978102 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:02:21.980120 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:02:21.980161 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:02:21.982149 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:02:21.982190 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:02:21.995646 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:02:21.996489 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:02:21.996562 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:02:21.998266 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:02:21.998304 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:02:21.999874 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:02:21.999917 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:02:22.001601 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:02:22.001641 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:02:22.003407 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:02:22.003488 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:02:22.005282 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:02:22.007121 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:02:22.016950 systemd[1]: Switching root. 
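From the first file write in this excerpt (12:02:21.279452) to the "Switching root." handoff (12:02:22.016950), the tail of the initrd phase takes about 0.74 s. Deltas like that can be pulled straight from the timestamp prefixes; note the prefix carries no year, so one has to be supplied by hand:

    from datetime import datetime

    FMT = "%b %d %H:%M:%S.%f"  # matches the "Jan 17 12:02:22.016950" prefix

    def ts(entry: str) -> datetime:
        """Parse the leading journald-style timestamp of a log entry."""
        return datetime.strptime(" ".join(entry.split()[:3]), FMT).replace(year=2025)

    start = ts("Jan 17 12:02:21.279452 ignition[944]: INFO : files: ...")
    pivot = ts("Jan 17 12:02:22.016950 systemd[1]: Switching root.")
    print(pivot - start)  # -> 0:00:00.737498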
Jan 17 12:02:22.053220 systemd-journald[236]: Journal stopped Jan 17 12:02:22.792022 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Jan 17 12:02:22.792079 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:02:22.792092 kernel: SELinux: policy capability open_perms=1 Jan 17 12:02:22.792103 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:02:22.792115 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:02:22.792129 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:02:22.792139 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:02:22.792148 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:02:22.792158 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:02:22.792167 kernel: audit: type=1403 audit(1737115342.251:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:02:22.792178 systemd[1]: Successfully loaded SELinux policy in 33.175ms. Jan 17 12:02:22.792197 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.221ms. Jan 17 12:02:22.792210 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:02:22.792222 systemd[1]: Detected virtualization kvm. Jan 17 12:02:22.792233 systemd[1]: Detected architecture arm64. Jan 17 12:02:22.792244 systemd[1]: Detected first boot. Jan 17 12:02:22.792254 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:02:22.792265 zram_generator::config[1063]: No configuration found. Jan 17 12:02:22.792276 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:02:22.792287 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:02:22.792298 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:02:22.792310 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:02:22.792320 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:02:22.792331 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:02:22.792341 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:02:22.792353 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:02:22.792364 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:02:22.792374 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:02:22.792387 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:02:22.792397 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:02:22.792408 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:02:22.792419 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:02:22.792430 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:02:22.792440 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
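The systemd 255 banner above encodes compile-time options as +NAME/-NAME tokens; this build has -APPARMOR and -BPF_FRAMEWORK, the latter being why journald warns a few entries below that the local system "does not support BPF/cgroup firewalling". A throwaway parser for the token list (abridged here):

    # Turn systemd's "+PAM +AUDIT ... -APPARMOR ..." banner into a feature dict.
    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
              "-GNUTLS +OPENSSL -ACL +BLKID +TPM2 -BPF_FRAMEWORK "
              "default-hierarchy=unified")  # abridged from the banner above

    features = {tok[1:]: tok.startswith("+")
                for tok in banner.split() if tok[0] in "+-"}
    print(features["SELINUX"], features["BPF_FRAMEWORK"])  # -> True False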
Jan 17 12:02:22.792451 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:02:22.792461 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 17 12:02:22.792472 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:02:22.792482 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:02:22.792494 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:02:22.792519 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:02:22.792531 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:02:22.792542 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:02:22.792553 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:02:22.792563 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:02:22.792574 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:02:22.792586 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:02:22.792597 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:02:22.792608 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:02:22.792618 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:02:22.792630 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:02:22.792641 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:02:22.792651 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:02:22.792661 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:02:22.792672 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:02:22.792682 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:02:22.792694 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:02:22.792705 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:02:22.792716 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:02:22.792726 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:02:22.792737 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:02:22.792748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:02:22.792765 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:02:22.792778 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:02:22.792791 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:02:22.792802 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:02:22.792813 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:02:22.792824 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 12:02:22.792835 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Jan 17 12:02:22.792845 kernel: fuse: init (API version 7.39) Jan 17 12:02:22.792855 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:02:22.792866 kernel: loop: module loaded Jan 17 12:02:22.792881 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:02:22.792893 kernel: ACPI: bus type drm_connector registered Jan 17 12:02:22.792903 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:02:22.792913 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:02:22.792923 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:02:22.792934 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:02:22.792944 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:02:22.792955 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:02:22.792966 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:02:22.792997 systemd-journald[1147]: Collecting audit messages is disabled. Jan 17 12:02:22.793021 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:02:22.793032 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:02:22.793043 systemd-journald[1147]: Journal started Jan 17 12:02:22.793064 systemd-journald[1147]: Runtime Journal (/run/log/journal/51b44ec5e8244f5f9ef960d12670d6d5) is 5.9M, max 47.3M, 41.4M free. Jan 17 12:02:22.794870 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:02:22.796549 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:02:22.797746 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:02:22.798973 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:02:22.799140 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:02:22.800246 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:02:22.800409 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:02:22.801540 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:02:22.801703 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:02:22.802721 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:02:22.802886 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:02:22.804087 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:02:22.804249 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:02:22.805319 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:02:22.805582 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:02:22.806728 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:02:22.808373 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:02:22.809674 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:02:22.821002 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:02:22.825677 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jan 17 12:02:22.829654 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:02:22.830487 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:02:22.832684 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:02:22.836759 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:02:22.837616 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:02:22.840801 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:02:22.841716 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:02:22.845075 systemd-journald[1147]: Time spent on flushing to /var/log/journal/51b44ec5e8244f5f9ef960d12670d6d5 is 22.655ms for 849 entries. Jan 17 12:02:22.845075 systemd-journald[1147]: System Journal (/var/log/journal/51b44ec5e8244f5f9ef960d12670d6d5) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:02:22.881069 systemd-journald[1147]: Received client request to flush runtime journal. Jan 17 12:02:22.846311 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:02:22.848688 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:02:22.851925 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:02:22.853068 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:02:22.853996 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:02:22.859251 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:02:22.860711 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:02:22.861892 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:02:22.871620 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:02:22.873158 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 12:02:22.882655 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:02:22.883918 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 17 12:02:22.883933 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 17 12:02:22.890432 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:02:22.902760 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:02:22.925453 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:02:22.935679 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:02:22.947591 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jan 17 12:02:22.947612 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jan 17 12:02:22.951367 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:02:23.277391 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
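The flush report above (22.655 ms for 849 entries) works out to roughly 27 µs per journal entry:

    # Per-entry cost of the runtime -> persistent journal flush logged above.
    flush_ms, entries = 22.655, 849
    print(f"{flush_ms / entries * 1000:.1f} us/entry")  # -> 26.7 us/entry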
Jan 17 12:02:23.290668 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:02:23.313705 systemd-udevd[1222]: Using default interface naming scheme 'v255'. Jan 17 12:02:23.327462 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:02:23.339417 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:02:23.346715 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:02:23.353915 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 17 12:02:23.376715 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1223) Jan 17 12:02:23.403731 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:02:23.413906 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:02:23.456737 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:02:23.457355 systemd-networkd[1232]: lo: Link UP Jan 17 12:02:23.457359 systemd-networkd[1232]: lo: Gained carrier Jan 17 12:02:23.458084 systemd-networkd[1232]: Enumeration completed Jan 17 12:02:23.458901 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:02:23.461054 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:02:23.461355 systemd-networkd[1232]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:02:23.461359 systemd-networkd[1232]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:02:23.462155 systemd-networkd[1232]: eth0: Link UP Jan 17 12:02:23.462164 systemd-networkd[1232]: eth0: Gained carrier Jan 17 12:02:23.462179 systemd-networkd[1232]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:02:23.462232 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:02:23.465657 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:02:23.482557 systemd-networkd[1232]: eth0: DHCPv4 address 10.0.0.47/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:02:23.488938 lvm[1260]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:02:23.497088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:02:23.520985 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:02:23.522187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:02:23.533643 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:02:23.538194 lvm[1268]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:02:23.563963 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:02:23.565091 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:02:23.566086 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:02:23.566128 systemd[1]: Reached target local-fs.target - Local File Systems. 
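The DHCPv4 lease above (10.0.0.47/16 with gateway 10.0.0.1) can be sanity-checked with the standard library alone, for example confirming the gateway is on-link:

    import ipaddress

    # Values from the networkd lease entry above.
    iface = ipaddress.ip_interface("10.0.0.47/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)             # -> 10.0.0.0/16
    print(gateway in iface.network)  # -> True: the gateway is directly reachable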
Jan 17 12:02:23.566900 systemd[1]: Reached target machines.target - Containers. Jan 17 12:02:23.568662 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:02:23.577639 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:02:23.579584 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:02:23.580572 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:02:23.583700 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:02:23.585600 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:02:23.589062 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:02:23.592635 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:02:23.597563 kernel: loop0: detected capacity change from 0 to 114328 Jan 17 12:02:23.599479 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:02:23.610423 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:02:23.610624 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:02:23.611150 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:02:23.651538 kernel: loop1: detected capacity change from 0 to 194512 Jan 17 12:02:23.689531 kernel: loop2: detected capacity change from 0 to 114432 Jan 17 12:02:23.725527 kernel: loop3: detected capacity change from 0 to 114328 Jan 17 12:02:23.729524 kernel: loop4: detected capacity change from 0 to 194512 Jan 17 12:02:23.735520 kernel: loop5: detected capacity change from 0 to 114432 Jan 17 12:02:23.737615 (sd-merge)[1290]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 12:02:23.738000 (sd-merge)[1290]: Merged extensions into '/usr'. Jan 17 12:02:23.741758 systemd[1]: Reloading requested from client PID 1276 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:02:23.741807 systemd[1]: Reloading... Jan 17 12:02:23.786558 zram_generator::config[1320]: No configuration found. Jan 17 12:02:23.822236 ldconfig[1272]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:02:23.889059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:02:23.936244 systemd[1]: Reloading finished in 194 ms. Jan 17 12:02:23.951369 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:02:23.952607 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:02:23.967642 systemd[1]: Starting ensure-sysext.service... Jan 17 12:02:23.969274 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:02:23.974082 systemd[1]: Reloading requested from client PID 1360 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:02:23.974096 systemd[1]: Reloading... Jan 17 12:02:23.984573 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
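sd-merge picked up three extensions: the two Flatcar built-ins plus the 'kubernetes' image Ignition downloaded and linked into /etc/extensions at the start of this excerpt; the loop0-loop5 capacity changes are consistent with those three images being attached twice, once to scan and once to merge, though the log does not say so explicitly. A sketch for listing the candidate images on the booted host (directory path as shown in the log):

    import os

    # List extension images/links under /etc/extensions, the directory the
    # Ignition op(b) link above points systemd-sysext at. Run on the host.
    ext_dir = "/etc/extensions"
    for name in sorted(os.listdir(ext_dir)):
        path = os.path.join(ext_dir, name)
        target = os.readlink(path) if os.path.islink(path) else "(regular file)"
        print(f"{name} -> {target}")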
Jan 17 12:02:23.984850 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:02:23.985457 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:02:23.985710 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Jan 17 12:02:23.985775 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Jan 17 12:02:23.987819 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:02:23.987830 systemd-tmpfiles[1361]: Skipping /boot Jan 17 12:02:23.994447 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:02:23.994462 systemd-tmpfiles[1361]: Skipping /boot Jan 17 12:02:24.010516 zram_generator::config[1390]: No configuration found. Jan 17 12:02:24.102442 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:02:24.149728 systemd[1]: Reloading finished in 175 ms. Jan 17 12:02:24.165494 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:02:24.186237 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:02:24.188831 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:02:24.190930 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:02:24.193649 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:02:24.197921 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:02:24.208556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:02:24.212439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:02:24.216663 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:02:24.221666 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:02:24.222573 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:02:24.223384 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:02:24.223575 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:02:24.226143 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:02:24.229175 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:02:24.229324 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:02:24.230980 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:02:24.231827 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:02:24.239734 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:02:24.239988 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:02:24.244851 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 17 12:02:24.248242 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:02:24.249803 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:02:24.257814 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:02:24.261081 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:02:24.262727 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:02:24.264744 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:02:24.267626 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:02:24.268710 augenrules[1473]: No rules Jan 17 12:02:24.277776 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:02:24.277943 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:02:24.279423 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:02:24.280771 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:02:24.282331 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:02:24.282485 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:02:24.283892 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:02:24.284107 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:02:24.293555 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:02:24.295907 systemd-resolved[1436]: Positive Trust Anchors: Jan 17 12:02:24.295926 systemd-resolved[1436]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:02:24.295958 systemd-resolved[1436]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:02:24.303421 systemd-resolved[1436]: Defaulting to hostname 'linux'. Jan 17 12:02:24.303767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:02:24.305634 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:02:24.307344 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:02:24.309716 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:02:24.311582 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:02:24.311719 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:02:24.312416 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 17 12:02:24.315188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:02:24.315342 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:02:24.316801 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:02:24.316942 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:02:24.318115 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:02:24.318259 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:02:24.319576 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:02:24.319777 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:02:24.322598 systemd[1]: Finished ensure-sysext.service. Jan 17 12:02:24.326912 systemd[1]: Reached target network.target - Network. Jan 17 12:02:24.327599 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:02:24.328448 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:02:24.328509 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:02:24.338694 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:02:24.380831 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:02:24.381579 systemd-timesyncd[1504]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 12:02:24.381627 systemd-timesyncd[1504]: Initial clock synchronization to Fri 2025-01-17 12:02:24.468226 UTC. Jan 17 12:02:24.382119 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:02:24.382972 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:02:24.383889 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:02:24.384777 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:02:24.385652 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:02:24.385681 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:02:24.386310 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:02:24.387192 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:02:24.388065 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:02:24.388968 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:02:24.390071 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:02:24.392282 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:02:24.394168 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:02:24.401522 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:02:24.402309 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:02:24.403066 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:02:24.403879 systemd[1]: System is tainted: cgroupsv1 Jan 17 12:02:24.403926 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
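timesyncd logged the synchronization at local time 12:02:24.381627 against a server-derived time of 12:02:24.468226, so the guest clock was about 87 ms behind when it was stepped:

    from datetime import datetime

    # Local log time of the sync message vs. the synchronized time it reports.
    local  = datetime(2025, 1, 17, 12, 2, 24, 381627)
    synced = datetime(2025, 1, 17, 12, 2, 24, 468226)
    print((synced - local).total_seconds() * 1000, "ms")  # ≈ 86.6 ms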
Jan 17 12:02:24.403954 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:02:24.405082 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:02:24.406867 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:02:24.408485 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:02:24.411665 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:02:24.412588 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:02:24.415673 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:02:24.417337 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:02:24.423944 jq[1510]: false Jan 17 12:02:24.422195 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:02:24.430732 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:02:24.433534 extend-filesystems[1511]: Found loop3 Jan 17 12:02:24.433534 extend-filesystems[1511]: Found loop4 Jan 17 12:02:24.435029 extend-filesystems[1511]: Found loop5 Jan 17 12:02:24.435029 extend-filesystems[1511]: Found vda Jan 17 12:02:24.435029 extend-filesystems[1511]: Found vda1 Jan 17 12:02:24.435029 extend-filesystems[1511]: Found vda2 Jan 17 12:02:24.435029 extend-filesystems[1511]: Found vda3 Jan 17 12:02:24.435029 extend-filesystems[1511]: Found usr Jan 17 12:02:24.435029 extend-filesystems[1511]: Found vda4 Jan 17 12:02:24.435029 extend-filesystems[1511]: Found vda6 Jan 17 12:02:24.435029 extend-filesystems[1511]: Found vda7 Jan 17 12:02:24.435029 extend-filesystems[1511]: Found vda9 Jan 17 12:02:24.435029 extend-filesystems[1511]: Checking size of /dev/vda9 Jan 17 12:02:24.444336 dbus-daemon[1509]: [system] SELinux support is enabled Jan 17 12:02:24.437699 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:02:24.446990 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:02:24.448257 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:02:24.451525 extend-filesystems[1511]: Resized partition /dev/vda9 Jan 17 12:02:24.452286 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:02:24.454399 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:02:24.459690 extend-filesystems[1533]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:02:24.460139 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:02:24.460357 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:02:24.461577 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:02:24.461796 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:02:24.465171 jq[1531]: true Jan 17 12:02:24.469891 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 12:02:24.480584 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1223) Jan 17 12:02:24.478013 systemd[1]: motdgen.service: Deactivated successfully. 
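The on-line resize above grows the root filesystem from 553472 to 1864699 blocks; with the 4 KiB block size resize2fs reports just below, that is roughly 2.1 GiB to 7.1 GiB, i.e. the root partition expanding to fill the virtual disk:

    # Sizes implied by the EXT4 on-line resize logged above (4 KiB blocks).
    BLOCK = 4096
    old_blocks, new_blocks = 553_472, 1_864_699

    to_gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"{to_gib(old_blocks):.2f} GiB -> {to_gib(new_blocks):.2f} GiB")
    # -> 2.11 GiB -> 7.11 GiB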
Jan 17 12:02:24.486234 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:02:24.486289 (ntainerd)[1541]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:02:24.504000 update_engine[1528]: I20250117 12:02:24.503335 1528 main.cc:92] Flatcar Update Engine starting Jan 17 12:02:24.505731 jq[1545]: true Jan 17 12:02:24.520242 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 12:02:24.520283 update_engine[1528]: I20250117 12:02:24.511879 1528 update_check_scheduler.cc:74] Next update check in 10m34s Jan 17 12:02:24.520334 tar[1538]: linux-arm64/helm Jan 17 12:02:24.524072 extend-filesystems[1533]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:02:24.524072 extend-filesystems[1533]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:02:24.524072 extend-filesystems[1533]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 12:02:24.531189 extend-filesystems[1511]: Resized filesystem in /dev/vda9 Jan 17 12:02:24.526482 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:02:24.526782 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:02:24.536557 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:02:24.537755 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:02:24.537791 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:02:24.538975 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:02:24.538998 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:02:24.540688 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:02:24.545268 systemd-logind[1522]: Watching system buttons on /dev/input/event0 (Power Button) Jan 17 12:02:24.545870 systemd-logind[1522]: New seat seat0. Jan 17 12:02:24.550638 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:02:24.551572 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:02:24.567281 bash[1574]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:02:24.568181 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:02:24.570818 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 12:02:24.590462 locksmithd[1573]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:02:24.690403 containerd[1541]: time="2025-01-17T12:02:24.690112760Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:02:24.725543 containerd[1541]: time="2025-01-17T12:02:24.725215600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:02:24.727973 containerd[1541]: time="2025-01-17T12:02:24.726637120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:02:24.727973 containerd[1541]: time="2025-01-17T12:02:24.726669480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:02:24.727973 containerd[1541]: time="2025-01-17T12:02:24.726684880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:02:24.727973 containerd[1541]: time="2025-01-17T12:02:24.726852960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:02:24.727973 containerd[1541]: time="2025-01-17T12:02:24.726871040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:02:24.727973 containerd[1541]: time="2025-01-17T12:02:24.726925880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:02:24.727973 containerd[1541]: time="2025-01-17T12:02:24.726938720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:02:24.727973 containerd[1541]: time="2025-01-17T12:02:24.727144360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:02:24.727973 containerd[1541]: time="2025-01-17T12:02:24.727159440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:02:24.727973 containerd[1541]: time="2025-01-17T12:02:24.727172680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:02:24.727973 containerd[1541]: time="2025-01-17T12:02:24.727183360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:02:24.728213 containerd[1541]: time="2025-01-17T12:02:24.727252440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:02:24.728213 containerd[1541]: time="2025-01-17T12:02:24.727433760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:02:24.728213 containerd[1541]: time="2025-01-17T12:02:24.727583360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:02:24.728213 containerd[1541]: time="2025-01-17T12:02:24.727598880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:02:24.728213 containerd[1541]: time="2025-01-17T12:02:24.727675280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 17 12:02:24.728213 containerd[1541]: time="2025-01-17T12:02:24.727717320Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:02:24.731775 containerd[1541]: time="2025-01-17T12:02:24.731749120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:02:24.731895 containerd[1541]: time="2025-01-17T12:02:24.731877480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:02:24.731969 containerd[1541]: time="2025-01-17T12:02:24.731956560Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:02:24.732050 containerd[1541]: time="2025-01-17T12:02:24.732037040Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:02:24.732185 containerd[1541]: time="2025-01-17T12:02:24.732161800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:02:24.732411 containerd[1541]: time="2025-01-17T12:02:24.732394640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:02:24.733150 containerd[1541]: time="2025-01-17T12:02:24.733126640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:02:24.733465 containerd[1541]: time="2025-01-17T12:02:24.733445440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:02:24.733634 containerd[1541]: time="2025-01-17T12:02:24.733606120Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:02:24.733711 containerd[1541]: time="2025-01-17T12:02:24.733685200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:02:24.733807 containerd[1541]: time="2025-01-17T12:02:24.733792000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:02:24.733933 containerd[1541]: time="2025-01-17T12:02:24.733916400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:02:24.734003 containerd[1541]: time="2025-01-17T12:02:24.733991280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:02:24.734144 containerd[1541]: time="2025-01-17T12:02:24.734078520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:02:24.734213 containerd[1541]: time="2025-01-17T12:02:24.734194680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:02:24.734271 containerd[1541]: time="2025-01-17T12:02:24.734259360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:02:24.734396 containerd[1541]: time="2025-01-17T12:02:24.734380200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:02:24.734467 containerd[1541]: time="2025-01-17T12:02:24.734454760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 17 12:02:24.734539 containerd[1541]: time="2025-01-17T12:02:24.734526800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.734659 containerd[1541]: time="2025-01-17T12:02:24.734632400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.734760 containerd[1541]: time="2025-01-17T12:02:24.734718200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.734895 containerd[1541]: time="2025-01-17T12:02:24.734871840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.734964 containerd[1541]: time="2025-01-17T12:02:24.734945360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.735027 containerd[1541]: time="2025-01-17T12:02:24.735014400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.736169 containerd[1541]: time="2025-01-17T12:02:24.736138920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.736339 containerd[1541]: time="2025-01-17T12:02:24.736256000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.736426 containerd[1541]: time="2025-01-17T12:02:24.736412280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.736572 containerd[1541]: time="2025-01-17T12:02:24.736489480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.736648 containerd[1541]: time="2025-01-17T12:02:24.736624640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.736702 containerd[1541]: time="2025-01-17T12:02:24.736689560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.736825 containerd[1541]: time="2025-01-17T12:02:24.736802800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.736894 containerd[1541]: time="2025-01-17T12:02:24.736882640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:02:24.736968 containerd[1541]: time="2025-01-17T12:02:24.736957000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.737098 containerd[1541]: time="2025-01-17T12:02:24.737082560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.737162 containerd[1541]: time="2025-01-17T12:02:24.737152280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:02:24.737546 containerd[1541]: time="2025-01-17T12:02:24.737527360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:02:24.737727 containerd[1541]: time="2025-01-17T12:02:24.737706240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:02:24.737918 containerd[1541]: time="2025-01-17T12:02:24.737899680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:02:24.738021 containerd[1541]: time="2025-01-17T12:02:24.738006240Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:02:24.738143 containerd[1541]: time="2025-01-17T12:02:24.738127200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.738225 containerd[1541]: time="2025-01-17T12:02:24.738212240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:02:24.738332 containerd[1541]: time="2025-01-17T12:02:24.738261800Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:02:24.738392 containerd[1541]: time="2025-01-17T12:02:24.738379120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:02:24.738985 containerd[1541]: time="2025-01-17T12:02:24.738912000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:02:24.739228 containerd[1541]: time="2025-01-17T12:02:24.739210840Z" level=info msg="Connect containerd service" Jan 17 12:02:24.739367 containerd[1541]: time="2025-01-17T12:02:24.739352160Z" level=info msg="using legacy CRI server" Jan 17 12:02:24.739434 containerd[1541]: time="2025-01-17T12:02:24.739422800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:02:24.739676 containerd[1541]: time="2025-01-17T12:02:24.739658760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:02:24.740803 containerd[1541]: time="2025-01-17T12:02:24.740779080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:02:24.741206 containerd[1541]: time="2025-01-17T12:02:24.741094680Z" level=info msg="Start subscribing containerd event" Jan 17 12:02:24.741206 containerd[1541]: time="2025-01-17T12:02:24.741158680Z" level=info msg="Start recovering state" Jan 17 12:02:24.741333 containerd[1541]: time="2025-01-17T12:02:24.741226960Z" level=info msg="Start event monitor" Jan 17 12:02:24.741333 containerd[1541]: time="2025-01-17T12:02:24.741241320Z" level=info msg="Start snapshots syncer" Jan 17 12:02:24.741333 containerd[1541]: time="2025-01-17T12:02:24.741250240Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:02:24.741333 containerd[1541]: time="2025-01-17T12:02:24.741260800Z" level=info msg="Start streaming server" Jan 17 12:02:24.741790 containerd[1541]: time="2025-01-17T12:02:24.741772320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:02:24.742061 containerd[1541]: time="2025-01-17T12:02:24.742042760Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:02:24.746251 containerd[1541]: time="2025-01-17T12:02:24.744652120Z" level=info msg="containerd successfully booted in 0.056041s" Jan 17 12:02:24.744791 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:02:24.869123 tar[1538]: linux-arm64/LICENSE Jan 17 12:02:24.869542 tar[1538]: linux-arm64/README.md Jan 17 12:02:24.879874 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:02:25.424324 systemd-networkd[1232]: eth0: Gained IPv6LL Jan 17 12:02:25.430268 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:02:25.431800 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:02:25.440722 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 12:02:25.443356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:25.445405 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:02:25.470885 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:02:25.472176 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 12:02:25.472409 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
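The "failed to load cni during init" error containerd logs above is expected at this point in boot: the CRI plugin's config (dumped above) sets NetworkPluginConfDir to /etc/cni/net.d, and nothing has installed a network config there yet. A minimal sketch of a file that would satisfy that check, assuming the stock bridge and host-local plugins under /opt/cni/bin (both paths come from the log; the subnet is illustrative, and on a real kubeadm cluster the CNI add-on installs this instead):

# Illustrative only: clears "no network config found in /etc/cni/net.d".
mkdir -p /etc/cni/net.d
cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
{
  "cniVersion": "1.0.0",
  "name": "bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    }
  ]
}
EOF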
Jan 17 12:02:25.474065 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:02:25.834550 sshd_keygen[1529]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:02:25.852925 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:02:25.863755 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:02:25.868758 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:02:25.868992 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:02:25.872190 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:02:25.884155 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:02:25.886945 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:02:25.889019 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 17 12:02:25.890288 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:02:25.931685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:25.933062 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:02:25.935573 systemd[1]: Startup finished in 5.077s (kernel) + 3.721s (userspace) = 8.799s. Jan 17 12:02:25.935580 (kubelet)[1647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:02:26.449041 kubelet[1647]: E0117 12:02:26.448960 1647 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:02:26.452129 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:02:26.452334 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:02:30.956299 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:02:30.969768 systemd[1]: Started sshd@0-10.0.0.47:22-10.0.0.1:46594.service - OpenSSH per-connection server daemon (10.0.0.1:46594). Jan 17 12:02:31.021755 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 46594 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:02:31.023704 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:31.032044 systemd-logind[1522]: New session 1 of user core. Jan 17 12:02:31.032947 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:02:31.041742 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:02:31.050927 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:02:31.052901 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:02:31.058918 (systemd)[1667]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:02:31.130241 systemd[1667]: Queued start job for default target default.target. Jan 17 12:02:31.130596 systemd[1667]: Created slice app.slice - User Application Slice. Jan 17 12:02:31.130621 systemd[1667]: Reached target paths.target - Paths. Jan 17 12:02:31.130634 systemd[1667]: Reached target timers.target - Timers. 
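The kubelet failure above is the normal pre-bootstrap state rather than a fault: kubelet.service starts before anything has written /var/lib/kubelet/config.yaml, exits with status 1, and systemd keeps rescheduling it (the restart counter climbs later in this log). On a kubeadm-managed node that file is produced during cluster bootstrap; a hedged sketch of checking and resolving the condition (the --pod-network-cidr value is illustrative):

journalctl -u kubelet -n 20 --no-pager         # shows the same run.go:74 config error
ls /var/lib/kubelet/config.yaml                # absent until bootstrap
kubeadm init --pod-network-cidr=10.244.0.0/16  # writes config.yaml on a control-plane node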
Jan 17 12:02:31.144622 systemd[1667]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:02:31.150123 systemd[1667]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:02:31.150183 systemd[1667]: Reached target sockets.target - Sockets. Jan 17 12:02:31.150195 systemd[1667]: Reached target basic.target - Basic System. Jan 17 12:02:31.150231 systemd[1667]: Reached target default.target - Main User Target. Jan 17 12:02:31.150255 systemd[1667]: Startup finished in 86ms. Jan 17 12:02:31.150534 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:02:31.151831 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:02:31.214243 systemd[1]: Started sshd@1-10.0.0.47:22-10.0.0.1:46602.service - OpenSSH per-connection server daemon (10.0.0.1:46602). Jan 17 12:02:31.245855 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 46602 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:02:31.247072 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:31.251103 systemd-logind[1522]: New session 2 of user core. Jan 17 12:02:31.261791 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:02:31.313013 sshd[1679]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:31.321773 systemd[1]: Started sshd@2-10.0.0.47:22-10.0.0.1:46606.service - OpenSSH per-connection server daemon (10.0.0.1:46606). Jan 17 12:02:31.322139 systemd[1]: sshd@1-10.0.0.47:22-10.0.0.1:46602.service: Deactivated successfully. Jan 17 12:02:31.323906 systemd-logind[1522]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:02:31.324451 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:02:31.325395 systemd-logind[1522]: Removed session 2. Jan 17 12:02:31.353143 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 46606 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:02:31.354337 sshd[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:31.358543 systemd-logind[1522]: New session 3 of user core. Jan 17 12:02:31.366839 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:02:31.414906 sshd[1684]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:31.428748 systemd[1]: Started sshd@3-10.0.0.47:22-10.0.0.1:46620.service - OpenSSH per-connection server daemon (10.0.0.1:46620). Jan 17 12:02:31.429124 systemd[1]: sshd@2-10.0.0.47:22-10.0.0.1:46606.service: Deactivated successfully. Jan 17 12:02:31.430941 systemd-logind[1522]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:02:31.431475 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:02:31.432970 systemd-logind[1522]: Removed session 3. Jan 17 12:02:31.460518 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 46620 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:02:31.461760 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:31.465731 systemd-logind[1522]: New session 4 of user core. Jan 17 12:02:31.476760 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:02:31.528265 sshd[1692]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:31.536870 systemd[1]: Started sshd@4-10.0.0.47:22-10.0.0.1:46634.service - OpenSSH per-connection server daemon (10.0.0.1:46634). 
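Every session in this run authenticates the "core" user with the same RSA public key, identified by the SHA256:xsFjL0Ru... fingerprint sshd prints on each accept. To match a key on disk against that logged fingerprint (the path is illustrative):

ssh-keygen -lf ~/.ssh/id_rsa.pub   # prints "<bits> SHA256:<fingerprint> <comment> (RSA)"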
Jan 17 12:02:31.537245 systemd[1]: sshd@3-10.0.0.47:22-10.0.0.1:46620.service: Deactivated successfully. Jan 17 12:02:31.539576 systemd-logind[1522]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:02:31.539851 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:02:31.541247 systemd-logind[1522]: Removed session 4. Jan 17 12:02:31.568743 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 46634 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:02:31.569926 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:31.574347 systemd-logind[1522]: New session 5 of user core. Jan 17 12:02:31.583732 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:02:31.648454 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:02:31.650530 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:02:31.673456 sudo[1707]: pam_unix(sudo:session): session closed for user root Jan 17 12:02:31.675436 sshd[1700]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:31.685934 systemd[1]: Started sshd@5-10.0.0.47:22-10.0.0.1:46648.service - OpenSSH per-connection server daemon (10.0.0.1:46648). Jan 17 12:02:31.686735 systemd[1]: sshd@4-10.0.0.47:22-10.0.0.1:46634.service: Deactivated successfully. Jan 17 12:02:31.688356 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:02:31.689054 systemd-logind[1522]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:02:31.690468 systemd-logind[1522]: Removed session 5. Jan 17 12:02:31.718478 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 46648 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:02:31.719649 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:31.724564 systemd-logind[1522]: New session 6 of user core. Jan 17 12:02:31.729775 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:02:31.781457 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:02:31.781765 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:02:31.784751 sudo[1717]: pam_unix(sudo:session): session closed for user root Jan 17 12:02:31.789205 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:02:31.789468 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:02:31.805914 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:02:31.807421 auditctl[1720]: No rules Jan 17 12:02:31.807828 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:02:31.808071 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:02:31.811679 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:02:31.837723 augenrules[1739]: No rules Jan 17 12:02:31.839013 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:02:31.840519 sudo[1716]: pam_unix(sudo:session): session closed for user root Jan 17 12:02:31.842556 sshd[1709]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:31.854804 systemd[1]: Started sshd@6-10.0.0.47:22-10.0.0.1:46656.service - OpenSSH per-connection server daemon (10.0.0.1:46656). 
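The sudo sequence above (delete the shipped rule files, then restart audit-rules) leaves the kernel audit subsystem with an empty rule set, which is why both auditctl and augenrules report "No rules". Roughly equivalent manual steps with the standard auditd userspace tools, for reference:

auditctl -D        # flush all loaded audit rules
augenrules --load  # recompile /etc/audit/rules.d/*.rules (now empty) and load the result
auditctl -l        # confirms: "No rules"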
Jan 17 12:02:31.855197 systemd[1]: sshd@5-10.0.0.47:22-10.0.0.1:46648.service: Deactivated successfully. Jan 17 12:02:31.857099 systemd-logind[1522]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:02:31.857671 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:02:31.858621 systemd-logind[1522]: Removed session 6. Jan 17 12:02:31.886832 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 46656 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:02:31.888053 sshd[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:31.892063 systemd-logind[1522]: New session 7 of user core. Jan 17 12:02:31.909790 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:02:31.961204 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:02:31.961489 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:02:32.264894 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:02:32.264896 (dockerd)[1773]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:02:32.557118 dockerd[1773]: time="2025-01-17T12:02:32.555378260Z" level=info msg="Starting up" Jan 17 12:02:32.803613 dockerd[1773]: time="2025-01-17T12:02:32.803555917Z" level=info msg="Loading containers: start." Jan 17 12:02:32.894555 kernel: Initializing XFRM netlink socket Jan 17 12:02:32.960668 systemd-networkd[1232]: docker0: Link UP Jan 17 12:02:32.980779 dockerd[1773]: time="2025-01-17T12:02:32.980732959Z" level=info msg="Loading containers: done." Jan 17 12:02:32.994665 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3527378601-merged.mount: Deactivated successfully. Jan 17 12:02:32.995110 dockerd[1773]: time="2025-01-17T12:02:32.995065701Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:02:32.995191 dockerd[1773]: time="2025-01-17T12:02:32.995154598Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:02:32.995285 dockerd[1773]: time="2025-01-17T12:02:32.995260286Z" level=info msg="Daemon has completed initialization" Jan 17 12:02:33.021143 dockerd[1773]: time="2025-01-17T12:02:33.021019778Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:02:33.021459 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:02:33.631814 containerd[1541]: time="2025-01-17T12:02:33.631771104Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:02:34.353232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount969465983.mount: Deactivated successfully. 
Jan 17 12:02:35.609566 containerd[1541]: time="2025-01-17T12:02:35.609183418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:35.612696 containerd[1541]: time="2025-01-17T12:02:35.612661633Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=32202459" Jan 17 12:02:35.613823 containerd[1541]: time="2025-01-17T12:02:35.613778290Z" level=info msg="ImageCreate event name:\"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:35.616872 containerd[1541]: time="2025-01-17T12:02:35.616837915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:35.618165 containerd[1541]: time="2025-01-17T12:02:35.618116231Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"32199257\" in 1.986305348s" Jan 17 12:02:35.618165 containerd[1541]: time="2025-01-17T12:02:35.618155303Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\"" Jan 17 12:02:35.635920 containerd[1541]: time="2025-01-17T12:02:35.635866891Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:02:36.702529 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:02:36.708817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:36.793958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:36.798803 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:02:36.841591 kubelet[2004]: E0117 12:02:36.841461 2004 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:02:36.845021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:02:36.845770 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
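These PullImage/ImageCreate events land in containerd's k8s.io namespace, the one the CRI plugin uses, so they are invisible to the separately running dockerd. Assuming ctr and crictl are installed on the host, the pulled image can be confirmed with:

ctr -n k8s.io images ls | grep kube-apiserver
crictl images | grep kube-apiserver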
Jan 17 12:02:37.212129 containerd[1541]: time="2025-01-17T12:02:37.212064573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:37.212919 containerd[1541]: time="2025-01-17T12:02:37.212873776Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=29381104" Jan 17 12:02:37.213589 containerd[1541]: time="2025-01-17T12:02:37.213553658Z" level=info msg="ImageCreate event name:\"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:37.217006 containerd[1541]: time="2025-01-17T12:02:37.216967539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:37.218223 containerd[1541]: time="2025-01-17T12:02:37.218184953Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"30784892\" in 1.582285295s" Jan 17 12:02:37.218259 containerd[1541]: time="2025-01-17T12:02:37.218222555Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\"" Jan 17 12:02:37.237064 containerd[1541]: time="2025-01-17T12:02:37.237027783Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:02:38.434183 containerd[1541]: time="2025-01-17T12:02:38.434128694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:38.434697 containerd[1541]: time="2025-01-17T12:02:38.434661230Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=15765674" Jan 17 12:02:38.435462 containerd[1541]: time="2025-01-17T12:02:38.435432101Z" level=info msg="ImageCreate event name:\"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:38.438361 containerd[1541]: time="2025-01-17T12:02:38.438300092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:38.441924 containerd[1541]: time="2025-01-17T12:02:38.441673888Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"17169480\" in 1.20460562s" Jan 17 12:02:38.441924 containerd[1541]: time="2025-01-17T12:02:38.441722261Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\"" Jan 17 12:02:38.462023 
containerd[1541]: time="2025-01-17T12:02:38.461992530Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:02:39.488541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount356134707.mount: Deactivated successfully. Jan 17 12:02:39.835426 containerd[1541]: time="2025-01-17T12:02:39.835304983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:39.836973 containerd[1541]: time="2025-01-17T12:02:39.836770069Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=25274684" Jan 17 12:02:39.837888 containerd[1541]: time="2025-01-17T12:02:39.837822346Z" level=info msg="ImageCreate event name:\"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:39.839905 containerd[1541]: time="2025-01-17T12:02:39.839853938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:39.840596 containerd[1541]: time="2025-01-17T12:02:39.840453338Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"25273701\" in 1.378310605s" Jan 17 12:02:39.840596 containerd[1541]: time="2025-01-17T12:02:39.840490480Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\"" Jan 17 12:02:39.858393 containerd[1541]: time="2025-01-17T12:02:39.858357952Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:02:40.629484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3940595436.mount: Deactivated successfully. 
Jan 17 12:02:41.367907 containerd[1541]: time="2025-01-17T12:02:41.367862323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:41.368925 containerd[1541]: time="2025-01-17T12:02:41.368578879Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 17 12:02:41.369670 containerd[1541]: time="2025-01-17T12:02:41.369610238Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:41.374695 containerd[1541]: time="2025-01-17T12:02:41.373390633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:41.374695 containerd[1541]: time="2025-01-17T12:02:41.374547352Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.516150859s" Jan 17 12:02:41.374695 containerd[1541]: time="2025-01-17T12:02:41.374579713Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 17 12:02:41.392649 containerd[1541]: time="2025-01-17T12:02:41.392612333Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:02:41.826604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount329310108.mount: Deactivated successfully. 
Jan 17 12:02:41.831347 containerd[1541]: time="2025-01-17T12:02:41.831302572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:41.831883 containerd[1541]: time="2025-01-17T12:02:41.831850874Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 17 12:02:41.832584 containerd[1541]: time="2025-01-17T12:02:41.832538713Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:41.834758 containerd[1541]: time="2025-01-17T12:02:41.834715297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:41.835607 containerd[1541]: time="2025-01-17T12:02:41.835566946Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 442.912398ms" Jan 17 12:02:41.835655 containerd[1541]: time="2025-01-17T12:02:41.835606677Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 17 12:02:41.853678 containerd[1541]: time="2025-01-17T12:02:41.853639377Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:02:42.452872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3456278511.mount: Deactivated successfully. Jan 17 12:02:44.806392 containerd[1541]: time="2025-01-17T12:02:44.806336312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:44.806870 containerd[1541]: time="2025-01-17T12:02:44.806824530Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jan 17 12:02:44.808337 containerd[1541]: time="2025-01-17T12:02:44.808302957Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:44.811923 containerd[1541]: time="2025-01-17T12:02:44.811885148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:44.813062 containerd[1541]: time="2025-01-17T12:02:44.813024404Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.959346181s" Jan 17 12:02:44.813100 containerd[1541]: time="2025-01-17T12:02:44.813063357Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 17 12:02:46.987032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 17 12:02:46.996663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:47.080626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:47.084138 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:02:47.120313 kubelet[2235]: E0117 12:02:47.120254 2235 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:02:47.122567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:02:47.122703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:02:49.596148 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:49.615914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:49.632243 systemd[1]: Reloading requested from client PID 2252 ('systemctl') (unit session-7.scope)... Jan 17 12:02:49.632261 systemd[1]: Reloading... Jan 17 12:02:49.693821 zram_generator::config[2291]: No configuration found. Jan 17 12:02:49.868823 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:02:49.923668 systemd[1]: Reloading finished in 291 ms. Jan 17 12:02:49.964473 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:02:49.964575 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:02:49.964847 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:49.966992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:50.058220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:50.062304 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:02:50.112900 kubelet[2349]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:02:50.112900 kubelet[2349]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:02:50.112900 kubelet[2349]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
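The three deprecation warnings above all point to the same migration: move the flag values into the file passed via --config. A sketch of the kubelet.config.k8s.io/v1beta1 equivalents for the two flags that have config-file fields (the field names are real; the endpoint and directory values are taken from elsewhere in this log). --pod-infra-container-image has no kubelet config equivalent, since the sandbox image is containerd's setting (SandboxImage:registry.k8s.io/pause:3.8 in the CRI config dump above):

cat <<'EOF' >> /var/lib/kubelet/config.yaml
containerRuntimeEndpoint: "unix:///run/containerd/containerd.sock"      # replaces --container-runtime-endpoint
volumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/" # replaces --volume-plugin-dir
EOF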
Jan 17 12:02:50.113230 kubelet[2349]: I0117 12:02:50.112950 2349 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:02:51.275136 kubelet[2349]: I0117 12:02:51.275104 2349 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:02:51.276557 kubelet[2349]: I0117 12:02:51.275515 2349 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:02:51.276557 kubelet[2349]: I0117 12:02:51.275755 2349 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:02:51.306270 kubelet[2349]: E0117 12:02:51.306249 2349 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.47:6443: connect: connection refused Jan 17 12:02:51.307176 kubelet[2349]: I0117 12:02:51.307054 2349 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:02:51.315836 kubelet[2349]: I0117 12:02:51.315809 2349 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:02:51.316180 kubelet[2349]: I0117 12:02:51.316167 2349 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:02:51.316354 kubelet[2349]: I0117 12:02:51.316341 2349 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:02:51.316442 kubelet[2349]: I0117 12:02:51.316363 2349 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:02:51.316442 kubelet[2349]: I0117 12:02:51.316372 2349 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:02:51.317456 kubelet[2349]: I0117 12:02:51.317437 2349 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:02:51.321192 kubelet[2349]: I0117 12:02:51.321170 2349 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:02:51.321192 kubelet[2349]: 
I0117 12:02:51.321195 2349 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:02:51.321274 kubelet[2349]: I0117 12:02:51.321215 2349 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:02:51.321274 kubelet[2349]: I0117 12:02:51.321229 2349 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:02:51.321274 kubelet[2349]: W0117 12:02:51.321694 2349 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 17 12:02:51.321274 kubelet[2349]: E0117 12:02:51.321739 2349 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 17 12:02:51.322539 kubelet[2349]: W0117 12:02:51.322474 2349 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 17 12:02:51.322589 kubelet[2349]: E0117 12:02:51.322544 2349 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 17 12:02:51.323332 kubelet[2349]: I0117 12:02:51.323305 2349 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:02:51.324560 kubelet[2349]: I0117 12:02:51.324534 2349 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:02:51.325347 kubelet[2349]: W0117 12:02:51.325316 2349 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
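All of the reflector failures above share a single cause: the kubelet is trying to list and watch objects at https://10.0.0.47:6443 before the kube-apiserver static pod (created further down) exists, so every TCP connect is refused. This is the expected chicken-and-egg phase of static-pod bootstrap, and it can be confirmed from the node:

ss -tlnp | grep 6443                                     # nothing listening yet
curl -sk https://10.0.0.47:6443/healthz || echo refused  # fails until the apiserver pod is up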
Jan 17 12:02:51.326122 kubelet[2349]: I0117 12:02:51.326089 2349 server.go:1256] "Started kubelet" Jan 17 12:02:51.326439 kubelet[2349]: I0117 12:02:51.326409 2349 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:02:51.326643 kubelet[2349]: I0117 12:02:51.326616 2349 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:02:51.326890 kubelet[2349]: I0117 12:02:51.326831 2349 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:02:51.327187 kubelet[2349]: I0117 12:02:51.327163 2349 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:02:51.331852 kubelet[2349]: I0117 12:02:51.329009 2349 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:02:51.331852 kubelet[2349]: I0117 12:02:51.330368 2349 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:02:51.331852 kubelet[2349]: I0117 12:02:51.330447 2349 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:02:51.331852 kubelet[2349]: I0117 12:02:51.330521 2349 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:02:51.331852 kubelet[2349]: W0117 12:02:51.330789 2349 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 17 12:02:51.331852 kubelet[2349]: E0117 12:02:51.330825 2349 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 17 12:02:51.331852 kubelet[2349]: E0117 12:02:51.331036 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="200ms" Jan 17 12:02:51.333893 kubelet[2349]: I0117 12:02:51.333839 2349 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:02:51.333949 kubelet[2349]: I0117 12:02:51.333926 2349 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:02:51.334732 kubelet[2349]: E0117 12:02:51.334711 2349 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.47:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.47:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181b793a01c2dff1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 12:02:51.326070769 +0000 UTC m=+1.260252172,LastTimestamp:2025-01-17 12:02:51.326070769 +0000 UTC m=+1.260252172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 12:02:51.334968 kubelet[2349]: E0117 12:02:51.334944 2349 kubelet.go:1462] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:02:51.335211 kubelet[2349]: I0117 12:02:51.335179 2349 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:02:51.341929 kubelet[2349]: I0117 12:02:51.341821 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:02:51.343027 kubelet[2349]: I0117 12:02:51.342731 2349 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:02:51.343027 kubelet[2349]: I0117 12:02:51.342750 2349 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:02:51.343027 kubelet[2349]: I0117 12:02:51.342765 2349 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:02:51.343027 kubelet[2349]: E0117 12:02:51.342808 2349 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:02:51.348415 kubelet[2349]: W0117 12:02:51.348372 2349 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 17 12:02:51.348667 kubelet[2349]: E0117 12:02:51.348642 2349 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 17 12:02:51.356443 kubelet[2349]: I0117 12:02:51.356415 2349 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:02:51.356443 kubelet[2349]: I0117 12:02:51.356437 2349 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:02:51.356555 kubelet[2349]: I0117 12:02:51.356455 2349 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:02:51.431795 kubelet[2349]: I0117 12:02:51.431769 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:02:51.432325 kubelet[2349]: E0117 12:02:51.432303 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jan 17 12:02:51.443573 kubelet[2349]: E0117 12:02:51.443536 2349 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:02:51.455952 kubelet[2349]: I0117 12:02:51.455923 2349 policy_none.go:49] "None policy: Start" Jan 17 12:02:51.456696 kubelet[2349]: I0117 12:02:51.456672 2349 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:02:51.456804 kubelet[2349]: I0117 12:02:51.456724 2349 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:02:51.461312 kubelet[2349]: I0117 12:02:51.460666 2349 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:02:51.461312 kubelet[2349]: I0117 12:02:51.460905 2349 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:02:51.462306 kubelet[2349]: E0117 12:02:51.462275 2349 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 12:02:51.532080 kubelet[2349]: E0117 12:02:51.531996 2349 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="400ms" Jan 17 12:02:51.634308 kubelet[2349]: I0117 12:02:51.634271 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:02:51.634637 kubelet[2349]: E0117 12:02:51.634611 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jan 17 12:02:51.643718 kubelet[2349]: I0117 12:02:51.643681 2349 topology_manager.go:215] "Topology Admit Handler" podUID="dd466de870bdf0e573d7965dbd759acf" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:02:51.644508 kubelet[2349]: I0117 12:02:51.644482 2349 topology_manager.go:215] "Topology Admit Handler" podUID="605dd245551545e29d4e79fb03fd341e" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:02:51.645176 kubelet[2349]: I0117 12:02:51.645145 2349 topology_manager.go:215] "Topology Admit Handler" podUID="7d3649c826cdee988de058cf2e658738" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:02:51.733268 kubelet[2349]: I0117 12:02:51.733239 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:02:51.733348 kubelet[2349]: I0117 12:02:51.733277 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/605dd245551545e29d4e79fb03fd341e-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"605dd245551545e29d4e79fb03fd341e\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:02:51.733348 kubelet[2349]: I0117 12:02:51.733303 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d3649c826cdee988de058cf2e658738-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7d3649c826cdee988de058cf2e658738\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:02:51.733348 kubelet[2349]: I0117 12:02:51.733321 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d3649c826cdee988de058cf2e658738-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7d3649c826cdee988de058cf2e658738\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:02:51.733348 kubelet[2349]: I0117 12:02:51.733349 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d3649c826cdee988de058cf2e658738-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7d3649c826cdee988de058cf2e658738\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:02:51.733450 kubelet[2349]: I0117 12:02:51.733391 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:02:51.733450 kubelet[2349]: I0117 12:02:51.733414 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:02:51.733518 kubelet[2349]: I0117 12:02:51.733485 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:02:51.733548 kubelet[2349]: I0117 12:02:51.733538 2349 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:02:51.933480 kubelet[2349]: E0117 12:02:51.933378 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="800ms" Jan 17 12:02:51.948736 kubelet[2349]: E0117 12:02:51.948699 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:51.949352 containerd[1541]: time="2025-01-17T12:02:51.949313399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:605dd245551545e29d4e79fb03fd341e,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:51.949712 kubelet[2349]: E0117 12:02:51.949340 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:51.949712 kubelet[2349]: E0117 12:02:51.949452 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:51.949811 containerd[1541]: time="2025-01-17T12:02:51.949755828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7d3649c826cdee988de058cf2e658738,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:51.950393 containerd[1541]: time="2025-01-17T12:02:51.950313776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd466de870bdf0e573d7965dbd759acf,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:52.035701 kubelet[2349]: I0117 12:02:52.035665 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:02:52.036017 kubelet[2349]: E0117 12:02:52.035988 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jan 17 12:02:52.377141 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1742462033.mount: Deactivated successfully. Jan 17 12:02:52.378885 kubelet[2349]: W0117 12:02:52.378858 2349 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 17 12:02:52.379120 kubelet[2349]: E0117 12:02:52.378903 2349 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 17 12:02:52.381548 containerd[1541]: time="2025-01-17T12:02:52.381438213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:52.383208 containerd[1541]: time="2025-01-17T12:02:52.383135713Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:02:52.383932 containerd[1541]: time="2025-01-17T12:02:52.383826316Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:52.385471 containerd[1541]: time="2025-01-17T12:02:52.385421066Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:52.386547 containerd[1541]: time="2025-01-17T12:02:52.386470055Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:52.386669 containerd[1541]: time="2025-01-17T12:02:52.386629702Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:02:52.387475 containerd[1541]: time="2025-01-17T12:02:52.387439821Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 17 12:02:52.389181 containerd[1541]: time="2025-01-17T12:02:52.389142883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:52.390018 containerd[1541]: time="2025-01-17T12:02:52.389981850Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 439.613216ms" Jan 17 12:02:52.392775 containerd[1541]: time="2025-01-17T12:02:52.392736182Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 443.341196ms" 
Jan 17 12:02:52.395481 containerd[1541]: time="2025-01-17T12:02:52.395344190Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 445.520059ms" Jan 17 12:02:52.559676 containerd[1541]: time="2025-01-17T12:02:52.559487757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:52.559676 containerd[1541]: time="2025-01-17T12:02:52.559653005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:52.559958 containerd[1541]: time="2025-01-17T12:02:52.559688456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:52.559958 containerd[1541]: time="2025-01-17T12:02:52.559732589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:52.559958 containerd[1541]: time="2025-01-17T12:02:52.559747193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:52.559958 containerd[1541]: time="2025-01-17T12:02:52.559817934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:52.559958 containerd[1541]: time="2025-01-17T12:02:52.559672411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:52.559958 containerd[1541]: time="2025-01-17T12:02:52.559809692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:52.560138 containerd[1541]: time="2025-01-17T12:02:52.559951013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:52.560138 containerd[1541]: time="2025-01-17T12:02:52.559994706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:52.560138 containerd[1541]: time="2025-01-17T12:02:52.560005509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:52.560138 containerd[1541]: time="2025-01-17T12:02:52.560073369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:52.607985 containerd[1541]: time="2025-01-17T12:02:52.607864811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd466de870bdf0e573d7965dbd759acf,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf9bed012fb91a37b70206eb97d564860b257121511d8c5571a1dbc9373c50e1\"" Jan 17 12:02:52.608929 containerd[1541]: time="2025-01-17T12:02:52.608896716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7d3649c826cdee988de058cf2e658738,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2fde3cf619a92149d800c3f699dc64854afa791674d7616e2286e9a423db06b\"" Jan 17 12:02:52.609910 kubelet[2349]: E0117 12:02:52.609821 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:52.610486 kubelet[2349]: E0117 12:02:52.609948 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:52.613392 containerd[1541]: time="2025-01-17T12:02:52.613123961Z" level=info msg="CreateContainer within sandbox \"cf9bed012fb91a37b70206eb97d564860b257121511d8c5571a1dbc9373c50e1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:02:52.613392 containerd[1541]: time="2025-01-17T12:02:52.613245477Z" level=info msg="CreateContainer within sandbox \"d2fde3cf619a92149d800c3f699dc64854afa791674d7616e2286e9a423db06b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:02:52.614343 containerd[1541]: time="2025-01-17T12:02:52.614320714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:605dd245551545e29d4e79fb03fd341e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2367016687ea1875f151c92c50ad56740ba9454617dd7292ad764f96a4ede45d\"" Jan 17 12:02:52.615030 kubelet[2349]: E0117 12:02:52.615010 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:52.616952 containerd[1541]: time="2025-01-17T12:02:52.616916159Z" level=info msg="CreateContainer within sandbox \"2367016687ea1875f151c92c50ad56740ba9454617dd7292ad764f96a4ede45d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:02:52.627931 containerd[1541]: time="2025-01-17T12:02:52.627309061Z" level=info msg="CreateContainer within sandbox \"cf9bed012fb91a37b70206eb97d564860b257121511d8c5571a1dbc9373c50e1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fde59490be4cf0d7e5ee88297d880ab75107ece98d3fd0ffa618181dc8836d24\"" Jan 17 12:02:52.630744 containerd[1541]: time="2025-01-17T12:02:52.630716865Z" level=info msg="StartContainer for \"fde59490be4cf0d7e5ee88297d880ab75107ece98d3fd0ffa618181dc8836d24\"" Jan 17 12:02:52.637554 containerd[1541]: time="2025-01-17T12:02:52.637489861Z" level=info msg="CreateContainer within sandbox \"d2fde3cf619a92149d800c3f699dc64854afa791674d7616e2286e9a423db06b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d2d204fcd0574b47a64338e7215acf574b1b87f58259e0de69c70d7ecc1863c8\"" Jan 17 12:02:52.638391 containerd[1541]: time="2025-01-17T12:02:52.638265449Z" level=info msg="StartContainer for 
\"d2d204fcd0574b47a64338e7215acf574b1b87f58259e0de69c70d7ecc1863c8\"" Jan 17 12:02:52.640406 containerd[1541]: time="2025-01-17T12:02:52.640356065Z" level=info msg="CreateContainer within sandbox \"2367016687ea1875f151c92c50ad56740ba9454617dd7292ad764f96a4ede45d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9e9bf1cf5c25b9d33a192df162508f30a4424c059494c010e7bea3bc8b8d7746\"" Jan 17 12:02:52.640903 containerd[1541]: time="2025-01-17T12:02:52.640862214Z" level=info msg="StartContainer for \"9e9bf1cf5c25b9d33a192df162508f30a4424c059494c010e7bea3bc8b8d7746\"" Jan 17 12:02:52.690165 containerd[1541]: time="2025-01-17T12:02:52.690118488Z" level=info msg="StartContainer for \"d2d204fcd0574b47a64338e7215acf574b1b87f58259e0de69c70d7ecc1863c8\" returns successfully" Jan 17 12:02:52.690407 containerd[1541]: time="2025-01-17T12:02:52.690232282Z" level=info msg="StartContainer for \"fde59490be4cf0d7e5ee88297d880ab75107ece98d3fd0ffa618181dc8836d24\" returns successfully" Jan 17 12:02:52.701009 containerd[1541]: time="2025-01-17T12:02:52.696250455Z" level=info msg="StartContainer for \"9e9bf1cf5c25b9d33a192df162508f30a4424c059494c010e7bea3bc8b8d7746\" returns successfully" Jan 17 12:02:52.734762 kubelet[2349]: E0117 12:02:52.734724 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.47:6443: connect: connection refused" interval="1.6s" Jan 17 12:02:52.810770 kubelet[2349]: W0117 12:02:52.810708 2349 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 17 12:02:52.810770 kubelet[2349]: E0117 12:02:52.810772 2349 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 17 12:02:52.838576 kubelet[2349]: I0117 12:02:52.838543 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:02:52.839311 kubelet[2349]: E0117 12:02:52.839287 2349 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.47:6443/api/v1/nodes\": dial tcp 10.0.0.47:6443: connect: connection refused" node="localhost" Jan 17 12:02:52.866171 kubelet[2349]: W0117 12:02:52.866065 2349 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 17 12:02:52.866171 kubelet[2349]: E0117 12:02:52.866178 2349 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.47:6443: connect: connection refused Jan 17 12:02:53.355493 kubelet[2349]: E0117 12:02:53.355190 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:53.355493 kubelet[2349]: E0117 12:02:53.355439 2349 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:53.357534 kubelet[2349]: E0117 12:02:53.357394 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:54.339606 kubelet[2349]: E0117 12:02:54.339438 2349 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 12:02:54.359323 kubelet[2349]: E0117 12:02:54.359219 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:54.441141 kubelet[2349]: I0117 12:02:54.441071 2349 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:02:54.449861 kubelet[2349]: I0117 12:02:54.449680 2349 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:02:54.456528 kubelet[2349]: E0117 12:02:54.456475 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:02:54.556981 kubelet[2349]: E0117 12:02:54.556950 2349 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 12:02:54.618851 kubelet[2349]: E0117 12:02:54.618768 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:55.324250 kubelet[2349]: I0117 12:02:55.324201 2349 apiserver.go:52] "Watching apiserver" Jan 17 12:02:55.331626 kubelet[2349]: I0117 12:02:55.331605 2349 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:02:56.711342 systemd[1]: Reloading requested from client PID 2628 ('systemctl') (unit session-7.scope)... Jan 17 12:02:56.711357 systemd[1]: Reloading... Jan 17 12:02:56.772541 zram_generator::config[2667]: No configuration found. Jan 17 12:02:56.866540 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:02:56.928752 systemd[1]: Reloading finished in 217 ms. Jan 17 12:02:56.956412 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:56.956768 kubelet[2349]: I0117 12:02:56.956432 2349 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:02:56.967653 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:02:56.967964 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:56.980751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:57.071980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:57.075768 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:02:57.130696 kubelet[2719]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 12:02:57.130696 kubelet[2719]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:02:57.130696 kubelet[2719]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:02:57.131051 kubelet[2719]: I0117 12:02:57.130747 2719 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:02:57.135101 kubelet[2719]: I0117 12:02:57.135076 2719 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:02:57.135101 kubelet[2719]: I0117 12:02:57.135102 2719 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:02:57.135327 kubelet[2719]: I0117 12:02:57.135310 2719 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:02:57.137073 kubelet[2719]: I0117 12:02:57.137041 2719 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:02:57.138933 kubelet[2719]: I0117 12:02:57.138903 2719 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:02:57.146740 kubelet[2719]: I0117 12:02:57.145225 2719 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:02:57.146740 kubelet[2719]: I0117 12:02:57.145700 2719 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:02:57.146740 kubelet[2719]: I0117 12:02:57.145845 2719 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:02:57.146740 kubelet[2719]: I0117 12:02:57.145866 2719 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:02:57.146740 kubelet[2719]: I0117 12:02:57.145874 2719 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:02:57.146740 
kubelet[2719]: I0117 12:02:57.145907 2719 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:02:57.146988 kubelet[2719]: I0117 12:02:57.145990 2719 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:02:57.146988 kubelet[2719]: I0117 12:02:57.146002 2719 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:02:57.146988 kubelet[2719]: I0117 12:02:57.146021 2719 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:02:57.146988 kubelet[2719]: I0117 12:02:57.146035 2719 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:02:57.147780 kubelet[2719]: I0117 12:02:57.147179 2719 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:02:57.147780 kubelet[2719]: I0117 12:02:57.147356 2719 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:02:57.147780 kubelet[2719]: I0117 12:02:57.147706 2719 server.go:1256] "Started kubelet" Jan 17 12:02:57.154651 kubelet[2719]: I0117 12:02:57.154401 2719 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:02:57.157835 sudo[2734]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 12:02:57.158100 sudo[2734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 12:02:57.169781 kubelet[2719]: I0117 12:02:57.165893 2719 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:02:57.169781 kubelet[2719]: I0117 12:02:57.166586 2719 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:02:57.169781 kubelet[2719]: I0117 12:02:57.167423 2719 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:02:57.169781 kubelet[2719]: I0117 12:02:57.167580 2719 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:02:57.169781 kubelet[2719]: I0117 12:02:57.168868 2719 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:02:57.174443 kubelet[2719]: I0117 12:02:57.174414 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:02:57.176702 kubelet[2719]: I0117 12:02:57.176009 2719 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:02:57.176856 kubelet[2719]: I0117 12:02:57.176052 2719 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:02:57.176947 kubelet[2719]: E0117 12:02:57.176099 2719 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:02:57.176947 kubelet[2719]: I0117 12:02:57.176294 2719 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:02:57.176947 kubelet[2719]: I0117 12:02:57.176547 2719 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:02:57.177034 kubelet[2719]: I0117 12:02:57.176969 2719 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:02:57.177034 kubelet[2719]: I0117 12:02:57.176986 2719 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:02:57.177034 kubelet[2719]: I0117 12:02:57.177019 2719 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:02:57.177162 kubelet[2719]: E0117 12:02:57.177026 2719 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:02:57.179275 kubelet[2719]: I0117 12:02:57.179156 2719 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:02:57.224731 kubelet[2719]: I0117 12:02:57.224641 2719 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:02:57.225087 kubelet[2719]: I0117 12:02:57.224833 2719 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:02:57.225087 kubelet[2719]: I0117 12:02:57.224854 2719 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:02:57.225087 kubelet[2719]: I0117 12:02:57.224983 2719 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:02:57.225087 kubelet[2719]: I0117 12:02:57.225002 2719 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:02:57.225087 kubelet[2719]: I0117 12:02:57.225008 2719 policy_none.go:49] "None policy: Start" Jan 17 12:02:57.225629 kubelet[2719]: I0117 12:02:57.225610 2719 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:02:57.226327 kubelet[2719]: I0117 12:02:57.225745 2719 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:02:57.226327 kubelet[2719]: I0117 12:02:57.225903 2719 state_mem.go:75] "Updated machine memory state" Jan 17 12:02:57.227071 kubelet[2719]: I0117 12:02:57.226928 2719 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:02:57.227158 kubelet[2719]: I0117 12:02:57.227138 2719 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:02:57.272487 kubelet[2719]: I0117 12:02:57.272457 2719 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:02:57.278104 kubelet[2719]: I0117 12:02:57.278050 2719 topology_manager.go:215] "Topology Admit Handler" podUID="605dd245551545e29d4e79fb03fd341e" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:02:57.278177 kubelet[2719]: I0117 12:02:57.278116 2719 topology_manager.go:215] "Topology Admit Handler" podUID="7d3649c826cdee988de058cf2e658738" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:02:57.278177 kubelet[2719]: I0117 12:02:57.278168 2719 topology_manager.go:215] "Topology Admit Handler" podUID="dd466de870bdf0e573d7965dbd759acf" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:02:57.282259 kubelet[2719]: I0117 12:02:57.282223 2719 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 17 12:02:57.282353 kubelet[2719]: I0117 12:02:57.282291 2719 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:02:57.478332 kubelet[2719]: I0117 12:02:57.478212 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/7d3649c826cdee988de058cf2e658738-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7d3649c826cdee988de058cf2e658738\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:02:57.478332 kubelet[2719]: I0117 12:02:57.478259 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d3649c826cdee988de058cf2e658738-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7d3649c826cdee988de058cf2e658738\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:02:57.478332 kubelet[2719]: I0117 12:02:57.478283 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:02:57.478332 kubelet[2719]: I0117 12:02:57.478304 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:02:57.478514 kubelet[2719]: I0117 12:02:57.478350 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/605dd245551545e29d4e79fb03fd341e-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"605dd245551545e29d4e79fb03fd341e\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:02:57.478514 kubelet[2719]: I0117 12:02:57.478370 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d3649c826cdee988de058cf2e658738-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7d3649c826cdee988de058cf2e658738\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:02:57.478514 kubelet[2719]: I0117 12:02:57.478388 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:02:57.478514 kubelet[2719]: I0117 12:02:57.478406 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:02:57.478514 kubelet[2719]: I0117 12:02:57.478426 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:02:57.589266 kubelet[2719]: E0117 12:02:57.589034 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:57.589266 kubelet[2719]: E0117 12:02:57.589118 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:57.590016 kubelet[2719]: E0117 12:02:57.589489 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:57.608685 sudo[2734]: pam_unix(sudo:session): session closed for user root Jan 17 12:02:58.147583 kubelet[2719]: I0117 12:02:58.147497 2719 apiserver.go:52] "Watching apiserver" Jan 17 12:02:58.177913 kubelet[2719]: I0117 12:02:58.177877 2719 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:02:58.198506 kubelet[2719]: E0117 12:02:58.196864 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:58.198506 kubelet[2719]: E0117 12:02:58.197059 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:58.204072 kubelet[2719]: E0117 12:02:58.204047 2719 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 12:02:58.205840 kubelet[2719]: E0117 12:02:58.205820 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:58.229045 kubelet[2719]: I0117 12:02:58.229011 2719 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.22896215 podStartE2EDuration="1.22896215s" podCreationTimestamp="2025-01-17 12:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:58.22046163 +0000 UTC m=+1.138885918" watchObservedRunningTime="2025-01-17 12:02:58.22896215 +0000 UTC m=+1.147386478" Jan 17 12:02:58.229154 kubelet[2719]: I0117 12:02:58.229109 2719 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.229092487 podStartE2EDuration="1.229092487s" podCreationTimestamp="2025-01-17 12:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:58.228855976 +0000 UTC m=+1.147280264" watchObservedRunningTime="2025-01-17 12:02:58.229092487 +0000 UTC m=+1.147516775" Jan 17 12:02:58.236203 kubelet[2719]: I0117 12:02:58.236163 2719 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.236132135 podStartE2EDuration="1.236132135s" podCreationTimestamp="2025-01-17 12:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:58.235624468 +0000 UTC m=+1.154048756" watchObservedRunningTime="2025-01-17 12:02:58.236132135 +0000 UTC m=+1.154556423" Jan 17 12:02:59.199804 
kubelet[2719]: E0117 12:02:59.199767 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:59.358993 kubelet[2719]: E0117 12:02:59.358950 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:02:59.737626 sudo[1752]: pam_unix(sudo:session): session closed for user root Jan 17 12:02:59.740467 sshd[1745]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:59.744479 systemd-logind[1522]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:02:59.746458 systemd[1]: sshd@6-10.0.0.47:22-10.0.0.1:46656.service: Deactivated successfully. Jan 17 12:02:59.748136 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:02:59.748860 systemd-logind[1522]: Removed session 7. Jan 17 12:03:02.850111 kubelet[2719]: E0117 12:03:02.850074 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:03:03.206862 kubelet[2719]: E0117 12:03:03.205526 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:03:03.251449 kubelet[2719]: E0117 12:03:03.251375 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:03:04.207158 kubelet[2719]: E0117 12:03:04.207113 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:03:09.365842 kubelet[2719]: E0117 12:03:09.365813 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:03:10.145612 update_engine[1528]: I20250117 12:03:10.144888 1528 update_attempter.cc:509] Updating boot flags... 
Jan 17 12:03:10.172962 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2802) Jan 17 12:03:10.205550 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2803) Jan 17 12:03:10.232298 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2803) Jan 17 12:03:12.590917 kubelet[2719]: I0117 12:03:12.590871 2719 topology_manager.go:215] "Topology Admit Handler" podUID="78963eea-ae56-466f-ab8c-bbf584f2f2ca" podNamespace="kube-system" podName="kube-proxy-xg4sr" Jan 17 12:03:12.600654 kubelet[2719]: I0117 12:03:12.600211 2719 topology_manager.go:215] "Topology Admit Handler" podUID="768ee00b-c803-487a-b8bc-67bf7ac9aaf9" podNamespace="kube-system" podName="cilium-5q4sj" Jan 17 12:03:12.670760 kubelet[2719]: I0117 12:03:12.670721 2719 topology_manager.go:215] "Topology Admit Handler" podUID="d05e36f6-9712-4118-8564-36ed2a5cf68c" podNamespace="kube-system" podName="cilium-operator-5cc964979-vfhf6" Jan 17 12:03:12.723605 kubelet[2719]: I0117 12:03:12.723565 2719 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:03:12.739991 containerd[1541]: time="2025-01-17T12:03:12.739945370Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:03:12.740542 kubelet[2719]: I0117 12:03:12.740522 2719 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:03:12.778894 kubelet[2719]: I0117 12:03:12.778632 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-lib-modules\") pod \"cilium-5q4sj\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") " pod="kube-system/cilium-5q4sj" Jan 17 12:03:12.778894 kubelet[2719]: I0117 12:03:12.778678 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-clustermesh-secrets\") pod \"cilium-5q4sj\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") " pod="kube-system/cilium-5q4sj" Jan 17 12:03:12.778894 kubelet[2719]: I0117 12:03:12.778701 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-hubble-tls\") pod \"cilium-5q4sj\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") " pod="kube-system/cilium-5q4sj" Jan 17 12:03:12.778894 kubelet[2719]: I0117 12:03:12.778727 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6wlv\" (UniqueName: \"kubernetes.io/projected/d05e36f6-9712-4118-8564-36ed2a5cf68c-kube-api-access-d6wlv\") pod \"cilium-operator-5cc964979-vfhf6\" (UID: \"d05e36f6-9712-4118-8564-36ed2a5cf68c\") " pod="kube-system/cilium-operator-5cc964979-vfhf6" Jan 17 12:03:12.778894 kubelet[2719]: I0117 12:03:12.778750 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78963eea-ae56-466f-ab8c-bbf584f2f2ca-lib-modules\") pod \"kube-proxy-xg4sr\" (UID: \"78963eea-ae56-466f-ab8c-bbf584f2f2ca\") " pod="kube-system/kube-proxy-xg4sr" Jan 17 12:03:12.780025 kubelet[2719]: I0117 12:03:12.778769 2719 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-cilium-config-path\") pod \"cilium-5q4sj\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") " pod="kube-system/cilium-5q4sj" Jan 17 12:03:12.780025 kubelet[2719]: I0117 12:03:12.778789 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4pfb\" (UniqueName: \"kubernetes.io/projected/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-kube-api-access-h4pfb\") pod \"cilium-5q4sj\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") " pod="kube-system/cilium-5q4sj" Jan 17 12:03:12.780025 kubelet[2719]: I0117 12:03:12.778810 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-cni-path\") pod \"cilium-5q4sj\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") " pod="kube-system/cilium-5q4sj" Jan 17 12:03:12.780025 kubelet[2719]: I0117 12:03:12.778836 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-bpf-maps\") pod \"cilium-5q4sj\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") " pod="kube-system/cilium-5q4sj" Jan 17 12:03:12.780025 kubelet[2719]: I0117 12:03:12.778857 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-host-proc-sys-net\") pod \"cilium-5q4sj\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") " pod="kube-system/cilium-5q4sj" Jan 17 12:03:12.780125 kubelet[2719]: I0117 12:03:12.778877 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d05e36f6-9712-4118-8564-36ed2a5cf68c-cilium-config-path\") pod \"cilium-operator-5cc964979-vfhf6\" (UID: \"d05e36f6-9712-4118-8564-36ed2a5cf68c\") " pod="kube-system/cilium-operator-5cc964979-vfhf6" Jan 17 12:03:12.780125 kubelet[2719]: I0117 12:03:12.778899 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78963eea-ae56-466f-ab8c-bbf584f2f2ca-xtables-lock\") pod \"kube-proxy-xg4sr\" (UID: \"78963eea-ae56-466f-ab8c-bbf584f2f2ca\") " pod="kube-system/kube-proxy-xg4sr" Jan 17 12:03:12.780125 kubelet[2719]: I0117 12:03:12.778921 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szrdf\" (UniqueName: \"kubernetes.io/projected/78963eea-ae56-466f-ab8c-bbf584f2f2ca-kube-api-access-szrdf\") pod \"kube-proxy-xg4sr\" (UID: \"78963eea-ae56-466f-ab8c-bbf584f2f2ca\") " pod="kube-system/kube-proxy-xg4sr" Jan 17 12:03:12.780125 kubelet[2719]: I0117 12:03:12.778950 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-cilium-run\") pod \"cilium-5q4sj\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") " pod="kube-system/cilium-5q4sj" Jan 17 12:03:12.780125 kubelet[2719]: I0117 12:03:12.778975 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" 
(UniqueName: \"kubernetes.io/configmap/78963eea-ae56-466f-ab8c-bbf584f2f2ca-kube-proxy\") pod \"kube-proxy-xg4sr\" (UID: \"78963eea-ae56-466f-ab8c-bbf584f2f2ca\") " pod="kube-system/kube-proxy-xg4sr" Jan 17 12:03:12.780223 kubelet[2719]: I0117 12:03:12.778995 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-etc-cni-netd\") pod \"cilium-5q4sj\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") " pod="kube-system/cilium-5q4sj" Jan 17 12:03:12.780223 kubelet[2719]: I0117 12:03:12.779016 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-host-proc-sys-kernel\") pod \"cilium-5q4sj\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") " pod="kube-system/cilium-5q4sj" Jan 17 12:03:12.780223 kubelet[2719]: I0117 12:03:12.779038 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-hostproc\") pod \"cilium-5q4sj\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") " pod="kube-system/cilium-5q4sj" Jan 17 12:03:12.780223 kubelet[2719]: I0117 12:03:12.779059 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-cilium-cgroup\") pod \"cilium-5q4sj\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") " pod="kube-system/cilium-5q4sj" Jan 17 12:03:12.780223 kubelet[2719]: I0117 12:03:12.779082 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-xtables-lock\") pod \"cilium-5q4sj\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") " pod="kube-system/cilium-5q4sj" Jan 17 12:03:12.901211 kubelet[2719]: E0117 12:03:12.901122 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:03:12.903718 containerd[1541]: time="2025-01-17T12:03:12.903565452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xg4sr,Uid:78963eea-ae56-466f-ab8c-bbf584f2f2ca,Namespace:kube-system,Attempt:0,}" Jan 17 12:03:12.909878 kubelet[2719]: E0117 12:03:12.909288 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:03:12.909995 containerd[1541]: time="2025-01-17T12:03:12.909821884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5q4sj,Uid:768ee00b-c803-487a-b8bc-67bf7ac9aaf9,Namespace:kube-system,Attempt:0,}" Jan 17 12:03:12.926654 containerd[1541]: time="2025-01-17T12:03:12.925765442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:12.926654 containerd[1541]: time="2025-01-17T12:03:12.925833246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:12.926654 containerd[1541]: time="2025-01-17T12:03:12.925852447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:12.926654 containerd[1541]: time="2025-01-17T12:03:12.925947933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:12.932705 containerd[1541]: time="2025-01-17T12:03:12.932587429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:12.933182 containerd[1541]: time="2025-01-17T12:03:12.933011695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:12.933182 containerd[1541]: time="2025-01-17T12:03:12.933032656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:12.933182 containerd[1541]: time="2025-01-17T12:03:12.933119582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:12.962795 containerd[1541]: time="2025-01-17T12:03:12.962754317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xg4sr,Uid:78963eea-ae56-466f-ab8c-bbf584f2f2ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d8bd0fb9ad13713e396df8fa5253360ff17fdd125b43fd4fac7e723c620d7d0\"" Jan 17 12:03:12.963147 containerd[1541]: time="2025-01-17T12:03:12.963072137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5q4sj,Uid:768ee00b-c803-487a-b8bc-67bf7ac9aaf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3\"" Jan 17 12:03:12.965523 kubelet[2719]: E0117 12:03:12.965257 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:03:12.965863 kubelet[2719]: E0117 12:03:12.965691 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:03:12.967459 containerd[1541]: time="2025-01-17T12:03:12.967430450Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 12:03:12.971820 containerd[1541]: time="2025-01-17T12:03:12.971776562Z" level=info msg="CreateContainer within sandbox \"4d8bd0fb9ad13713e396df8fa5253360ff17fdd125b43fd4fac7e723c620d7d0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:03:12.974335 kubelet[2719]: E0117 12:03:12.974298 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:03:12.974755 containerd[1541]: time="2025-01-17T12:03:12.974713586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-vfhf6,Uid:d05e36f6-9712-4118-8564-36ed2a5cf68c,Namespace:kube-system,Attempt:0,}" Jan 17 12:03:12.992516 containerd[1541]: time="2025-01-17T12:03:12.991963025Z" level=info msg="CreateContainer within sandbox 
\"4d8bd0fb9ad13713e396df8fa5253360ff17fdd125b43fd4fac7e723c620d7d0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"812f043a933670cc439c7275c6111a5d7ad95df03b281bd77c6592516ca62b7b\"" Jan 17 12:03:12.995938 containerd[1541]: time="2025-01-17T12:03:12.995905832Z" level=info msg="StartContainer for \"812f043a933670cc439c7275c6111a5d7ad95df03b281bd77c6592516ca62b7b\"" Jan 17 12:03:12.997457 containerd[1541]: time="2025-01-17T12:03:12.997290479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:12.997457 containerd[1541]: time="2025-01-17T12:03:12.997338162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:12.997633 containerd[1541]: time="2025-01-17T12:03:12.997582657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:12.997803 containerd[1541]: time="2025-01-17T12:03:12.997739347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:13.038161 containerd[1541]: time="2025-01-17T12:03:13.038085805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-vfhf6,Uid:d05e36f6-9712-4118-8564-36ed2a5cf68c,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbd80810bc5b025dd3520363b025d72488c1c504278c49716dc9445aaea61693\"" Jan 17 12:03:13.038792 kubelet[2719]: E0117 12:03:13.038769 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:03:13.052186 containerd[1541]: time="2025-01-17T12:03:13.052085080Z" level=info msg="StartContainer for \"812f043a933670cc439c7275c6111a5d7ad95df03b281bd77c6592516ca62b7b\" returns successfully" Jan 17 12:03:13.230619 kubelet[2719]: E0117 12:03:13.230296 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:03:13.248256 kubelet[2719]: I0117 12:03:13.248206 2719 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xg4sr" podStartSLOduration=1.248168497 podStartE2EDuration="1.248168497s" podCreationTimestamp="2025-01-17 12:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:03:13.248130855 +0000 UTC m=+16.166555103" watchObservedRunningTime="2025-01-17 12:03:13.248168497 +0000 UTC m=+16.166592785" Jan 17 12:03:19.048969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2795337977.mount: Deactivated successfully. 
Jan 17 12:03:21.405483 containerd[1541]: time="2025-01-17T12:03:21.405438102Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:21.409517 containerd[1541]: time="2025-01-17T12:03:21.406968246Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651506" Jan 17 12:03:21.409517 containerd[1541]: time="2025-01-17T12:03:21.407217296Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:21.410514 containerd[1541]: time="2025-01-17T12:03:21.410470753Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.443005181s" Jan 17 12:03:21.410567 containerd[1541]: time="2025-01-17T12:03:21.410521635Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 17 12:03:21.413778 containerd[1541]: time="2025-01-17T12:03:21.413744650Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 12:03:21.427025 containerd[1541]: time="2025-01-17T12:03:21.426987204Z" level=info msg="CreateContainer within sandbox \"1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:03:21.452560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1551056797.mount: Deactivated successfully. 
Jan 17 12:03:21.463907 containerd[1541]: time="2025-01-17T12:03:21.463862747Z" level=info msg="CreateContainer within sandbox \"1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53\"" Jan 17 12:03:21.464446 containerd[1541]: time="2025-01-17T12:03:21.464419050Z" level=info msg="StartContainer for \"216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53\"" Jan 17 12:03:21.507610 containerd[1541]: time="2025-01-17T12:03:21.507535614Z" level=info msg="StartContainer for \"216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53\" returns successfully" Jan 17 12:03:21.651280 containerd[1541]: time="2025-01-17T12:03:21.646474789Z" level=info msg="shim disconnected" id=216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53 namespace=k8s.io Jan 17 12:03:21.651280 containerd[1541]: time="2025-01-17T12:03:21.651275190Z" level=warning msg="cleaning up after shim disconnected" id=216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53 namespace=k8s.io Jan 17 12:03:21.651280 containerd[1541]: time="2025-01-17T12:03:21.651288030Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:03:22.262859 kubelet[2719]: E0117 12:03:22.262821 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:03:22.266082 containerd[1541]: time="2025-01-17T12:03:22.266041399Z" level=info msg="CreateContainer within sandbox \"1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:03:22.298065 containerd[1541]: time="2025-01-17T12:03:22.297961962Z" level=info msg="CreateContainer within sandbox \"1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0\"" Jan 17 12:03:22.299345 containerd[1541]: time="2025-01-17T12:03:22.298549506Z" level=info msg="StartContainer for \"e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0\"" Jan 17 12:03:22.342141 containerd[1541]: time="2025-01-17T12:03:22.342104657Z" level=info msg="StartContainer for \"e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0\" returns successfully" Jan 17 12:03:22.360146 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:03:22.360840 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:03:22.360914 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:03:22.367563 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:03:22.377445 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 12:03:22.384258 containerd[1541]: time="2025-01-17T12:03:22.384200509Z" level=info msg="shim disconnected" id=e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0 namespace=k8s.io
Jan 17 12:03:22.384258 containerd[1541]: time="2025-01-17T12:03:22.384253351Z" level=warning msg="cleaning up after shim disconnected" id=e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0 namespace=k8s.io
Jan 17 12:03:22.384258 containerd[1541]: time="2025-01-17T12:03:22.384261712Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:03:22.450383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53-rootfs.mount: Deactivated successfully.
Jan 17 12:03:22.811177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2769384376.mount: Deactivated successfully.
Jan 17 12:03:23.264744 kubelet[2719]: E0117 12:03:23.264712 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:23.269339 containerd[1541]: time="2025-01-17T12:03:23.269293036Z" level=info msg="CreateContainer within sandbox \"1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 12:03:23.307089 containerd[1541]: time="2025-01-17T12:03:23.307034535Z" level=info msg="CreateContainer within sandbox \"1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c\""
Jan 17 12:03:23.307880 containerd[1541]: time="2025-01-17T12:03:23.307600557Z" level=info msg="StartContainer for \"d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c\""
Jan 17 12:03:23.359459 containerd[1541]: time="2025-01-17T12:03:23.359410920Z" level=info msg="StartContainer for \"d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c\" returns successfully"
Jan 17 12:03:23.432168 containerd[1541]: time="2025-01-17T12:03:23.432094250Z" level=info msg="shim disconnected" id=d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c namespace=k8s.io
Jan 17 12:03:23.432168 containerd[1541]: time="2025-01-17T12:03:23.432158092Z" level=warning msg="cleaning up after shim disconnected" id=d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c namespace=k8s.io
Jan 17 12:03:23.432168 containerd[1541]: time="2025-01-17T12:03:23.432167852Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:03:24.270040 kubelet[2719]: E0117 12:03:24.268285 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:24.273543 containerd[1541]: time="2025-01-17T12:03:24.270846363Z" level=info msg="CreateContainer within sandbox \"1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 12:03:24.302353 containerd[1541]: time="2025-01-17T12:03:24.302283093Z" level=info msg="CreateContainer within sandbox \"1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77\""
Jan 17 12:03:24.303346 containerd[1541]: time="2025-01-17T12:03:24.302777511Z" level=info msg="StartContainer for \"c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77\""
Jan 17 12:03:24.357711 containerd[1541]: time="2025-01-17T12:03:24.357595271Z" level=info msg="StartContainer for \"c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77\" returns successfully"
Jan 17 12:03:24.416265 containerd[1541]: time="2025-01-17T12:03:24.416199771Z" level=info msg="shim disconnected" id=c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77 namespace=k8s.io
Jan 17 12:03:24.416452 containerd[1541]: time="2025-01-17T12:03:24.416285175Z" level=warning msg="cleaning up after shim disconnected" id=c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77 namespace=k8s.io
Jan 17 12:03:24.416452 containerd[1541]: time="2025-01-17T12:03:24.416300575Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:03:24.450017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77-rootfs.mount: Deactivated successfully.
Jan 17 12:03:25.272299 kubelet[2719]: E0117 12:03:25.272169 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:25.280643 containerd[1541]: time="2025-01-17T12:03:25.280580236Z" level=info msg="CreateContainer within sandbox \"1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 12:03:25.296397 containerd[1541]: time="2025-01-17T12:03:25.296345721Z" level=info msg="CreateContainer within sandbox \"1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287\""
Jan 17 12:03:25.297301 containerd[1541]: time="2025-01-17T12:03:25.297259834Z" level=info msg="StartContainer for \"ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287\""
Jan 17 12:03:25.345467 containerd[1541]: time="2025-01-17T12:03:25.345418440Z" level=info msg="StartContainer for \"ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287\" returns successfully"
Jan 17 12:03:25.481277 kubelet[2719]: I0117 12:03:25.481247 2719 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 17 12:03:25.508316 kubelet[2719]: I0117 12:03:25.508265 2719 topology_manager.go:215] "Topology Admit Handler" podUID="d5afd7eb-f893-4d07-b23f-b0270e80dc6e" podNamespace="kube-system" podName="coredns-76f75df574-69t8s"
Jan 17 12:03:25.510485 kubelet[2719]: I0117 12:03:25.509430 2719 topology_manager.go:215] "Topology Admit Handler" podUID="d9261aa5-af66-4a41-879a-c68f2d901140" podNamespace="kube-system" podName="coredns-76f75df574-9vxhz"
Jan 17 12:03:25.581399 kubelet[2719]: I0117 12:03:25.581292 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7pr9\" (UniqueName: \"kubernetes.io/projected/d5afd7eb-f893-4d07-b23f-b0270e80dc6e-kube-api-access-b7pr9\") pod \"coredns-76f75df574-69t8s\" (UID: \"d5afd7eb-f893-4d07-b23f-b0270e80dc6e\") " pod="kube-system/coredns-76f75df574-69t8s"
Jan 17 12:03:25.582032 kubelet[2719]: I0117 12:03:25.581891 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5afd7eb-f893-4d07-b23f-b0270e80dc6e-config-volume\") pod \"coredns-76f75df574-69t8s\" (UID: \"d5afd7eb-f893-4d07-b23f-b0270e80dc6e\") " pod="kube-system/coredns-76f75df574-69t8s"
Jan 17 12:03:25.582336 kubelet[2719]: I0117 12:03:25.582319 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-655mb\" (UniqueName: \"kubernetes.io/projected/d9261aa5-af66-4a41-879a-c68f2d901140-kube-api-access-655mb\") pod \"coredns-76f75df574-9vxhz\" (UID: \"d9261aa5-af66-4a41-879a-c68f2d901140\") " pod="kube-system/coredns-76f75df574-9vxhz"
Jan 17 12:03:25.582545 kubelet[2719]: I0117 12:03:25.582489 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9261aa5-af66-4a41-879a-c68f2d901140-config-volume\") pod \"coredns-76f75df574-9vxhz\" (UID: \"d9261aa5-af66-4a41-879a-c68f2d901140\") " pod="kube-system/coredns-76f75df574-9vxhz"
Jan 17 12:03:25.818277 kubelet[2719]: E0117 12:03:25.818219 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:25.818277 kubelet[2719]: E0117 12:03:25.818277 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:25.819260 containerd[1541]: time="2025-01-17T12:03:25.818923497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9vxhz,Uid:d9261aa5-af66-4a41-879a-c68f2d901140,Namespace:kube-system,Attempt:0,}"
Jan 17 12:03:25.819521 containerd[1541]: time="2025-01-17T12:03:25.819476517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-69t8s,Uid:d5afd7eb-f893-4d07-b23f-b0270e80dc6e,Namespace:kube-system,Attempt:0,}"
Jan 17 12:03:26.105637 containerd[1541]: time="2025-01-17T12:03:26.105593682Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:03:26.106400 containerd[1541]: time="2025-01-17T12:03:26.106323147Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138370"
Jan 17 12:03:26.113985 containerd[1541]: time="2025-01-17T12:03:26.112902014Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:03:26.114854 containerd[1541]: time="2025-01-17T12:03:26.114825601Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.70103875s"
Jan 17 12:03:26.114924 containerd[1541]: time="2025-01-17T12:03:26.114856242Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 17 12:03:26.118870 containerd[1541]: time="2025-01-17T12:03:26.118812899Z" level=info msg="CreateContainer within sandbox \"dbd80810bc5b025dd3520363b025d72488c1c504278c49716dc9445aaea61693\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 17 12:03:26.139852 containerd[1541]: time="2025-01-17T12:03:26.139809545Z" level=info msg="CreateContainer within sandbox \"dbd80810bc5b025dd3520363b025d72488c1c504278c49716dc9445aaea61693\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027\""
Jan 17 12:03:26.140578 containerd[1541]: time="2025-01-17T12:03:26.140535850Z" level=info msg="StartContainer for \"3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027\""
Jan 17 12:03:26.185375 containerd[1541]: time="2025-01-17T12:03:26.185335399Z" level=info msg="StartContainer for \"3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027\" returns successfully"
Jan 17 12:03:26.278938 kubelet[2719]: E0117 12:03:26.278787 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:26.286195 kubelet[2719]: E0117 12:03:26.286172 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:26.288707 kubelet[2719]: I0117 12:03:26.287642 2719 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-vfhf6" podStartSLOduration=1.2129243490000001 podStartE2EDuration="14.287607416s" podCreationTimestamp="2025-01-17 12:03:12 +0000 UTC" firstStartedPulling="2025-01-17 12:03:13.040408223 +0000 UTC m=+15.958832471" lastFinishedPulling="2025-01-17 12:03:26.11509125 +0000 UTC m=+29.033515538" observedRunningTime="2025-01-17 12:03:26.28684891 +0000 UTC m=+29.205273198" watchObservedRunningTime="2025-01-17 12:03:26.287607416 +0000 UTC m=+29.206031704"
Jan 17 12:03:26.314821 kubelet[2719]: I0117 12:03:26.313745 2719 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5q4sj" podStartSLOduration=5.867178687 podStartE2EDuration="14.313700478s" podCreationTimestamp="2025-01-17 12:03:12 +0000 UTC" firstStartedPulling="2025-01-17 12:03:12.966388584 +0000 UTC m=+15.884812872" lastFinishedPulling="2025-01-17 12:03:21.412910335 +0000 UTC m=+24.331334663" observedRunningTime="2025-01-17 12:03:26.310711975 +0000 UTC m=+29.229136263" watchObservedRunningTime="2025-01-17 12:03:26.313700478 +0000 UTC m=+29.232124766"
Jan 17 12:03:27.287676 kubelet[2719]: E0117 12:03:27.287282 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:27.288815 kubelet[2719]: E0117 12:03:27.288757 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:27.367734 systemd[1]: Started sshd@7-10.0.0.47:22-10.0.0.1:37072.service - OpenSSH per-connection server daemon (10.0.0.1:37072).
Jan 17 12:03:27.400445 sshd[3572]: Accepted publickey for core from 10.0.0.1 port 37072 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:03:27.401645 sshd[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:03:27.405302 systemd-logind[1522]: New session 8 of user core.
Jan 17 12:03:27.412852 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 17 12:03:27.539335 sshd[3572]: pam_unix(sshd:session): session closed for user core
Jan 17 12:03:27.543099 systemd[1]: sshd@7-10.0.0.47:22-10.0.0.1:37072.service: Deactivated successfully.
Jan 17 12:03:27.545312 systemd[1]: session-8.scope: Deactivated successfully.
Jan 17 12:03:27.545958 systemd-logind[1522]: Session 8 logged out. Waiting for processes to exit.
Jan 17 12:03:27.546854 systemd-logind[1522]: Removed session 8.
Jan 17 12:03:28.289421 kubelet[2719]: E0117 12:03:28.289394 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:29.629101 systemd-networkd[1232]: cilium_host: Link UP
Jan 17 12:03:29.629562 systemd-networkd[1232]: cilium_net: Link UP
Jan 17 12:03:29.629711 systemd-networkd[1232]: cilium_net: Gained carrier
Jan 17 12:03:29.629831 systemd-networkd[1232]: cilium_host: Gained carrier
Jan 17 12:03:29.712217 systemd-networkd[1232]: cilium_vxlan: Link UP
Jan 17 12:03:29.712224 systemd-networkd[1232]: cilium_vxlan: Gained carrier
Jan 17 12:03:30.040543 kernel: NET: Registered PF_ALG protocol family
Jan 17 12:03:30.080688 systemd-networkd[1232]: cilium_net: Gained IPv6LL
Jan 17 12:03:30.512265 systemd-networkd[1232]: cilium_host: Gained IPv6LL
Jan 17 12:03:30.661520 systemd-networkd[1232]: lxc_health: Link UP
Jan 17 12:03:30.667984 systemd-networkd[1232]: lxc_health: Gained carrier
Jan 17 12:03:30.895726 systemd-networkd[1232]: cilium_vxlan: Gained IPv6LL
Jan 17 12:03:30.919055 kubelet[2719]: E0117 12:03:30.918672 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:30.968128 systemd-networkd[1232]: lxc11ae1de3bdf6: Link UP
Jan 17 12:03:30.975524 kernel: eth0: renamed from tmp87b65
Jan 17 12:03:30.986143 systemd-networkd[1232]: lxc01eed81a290f: Link UP
Jan 17 12:03:30.986770 systemd-networkd[1232]: lxc11ae1de3bdf6: Gained carrier
Jan 17 12:03:30.987538 kernel: eth0: renamed from tmpea692
Jan 17 12:03:30.994406 systemd-networkd[1232]: lxc01eed81a290f: Gained carrier
Jan 17 12:03:32.314033 systemd-networkd[1232]: lxc_health: Gained IPv6LL
Jan 17 12:03:32.319879 systemd-networkd[1232]: lxc01eed81a290f: Gained IPv6LL
Jan 17 12:03:32.555752 systemd[1]: Started sshd@8-10.0.0.47:22-10.0.0.1:56534.service - OpenSSH per-connection server daemon (10.0.0.1:56534).
Jan 17 12:03:32.627383 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 56534 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:03:32.628911 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:03:32.633700 systemd-logind[1522]: New session 9 of user core.
Jan 17 12:03:32.638821 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 17 12:03:32.768638 sshd[3968]: pam_unix(sshd:session): session closed for user core
Jan 17 12:03:32.771905 systemd-logind[1522]: Session 9 logged out. Waiting for processes to exit.
Jan 17 12:03:32.772668 systemd[1]: sshd@8-10.0.0.47:22-10.0.0.1:56534.service: Deactivated successfully.
Jan 17 12:03:32.774129 systemd[1]: session-9.scope: Deactivated successfully.
Jan 17 12:03:32.776551 systemd-logind[1522]: Removed session 9.
Jan 17 12:03:32.944809 systemd-networkd[1232]: lxc11ae1de3bdf6: Gained IPv6LL
Jan 17 12:03:34.481411 containerd[1541]: time="2025-01-17T12:03:34.481192176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:03:34.481411 containerd[1541]: time="2025-01-17T12:03:34.481241217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:03:34.481411 containerd[1541]: time="2025-01-17T12:03:34.481252098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:03:34.481411 containerd[1541]: time="2025-01-17T12:03:34.481339540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:03:34.489640 containerd[1541]: time="2025-01-17T12:03:34.489161590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:03:34.489640 containerd[1541]: time="2025-01-17T12:03:34.489216832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:03:34.489640 containerd[1541]: time="2025-01-17T12:03:34.489231792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:03:34.489640 containerd[1541]: time="2025-01-17T12:03:34.489309754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:03:34.505934 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 17 12:03:34.508133 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 17 12:03:34.526085 containerd[1541]: time="2025-01-17T12:03:34.526045782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-69t8s,Uid:d5afd7eb-f893-4d07-b23f-b0270e80dc6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"87b658b402aa2c57413fcbbf8136772c75c05ffa8c5a0417bfab02b382583c48\""
Jan 17 12:03:34.528035 kubelet[2719]: E0117 12:03:34.527486 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:34.528330 containerd[1541]: time="2025-01-17T12:03:34.527816230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9vxhz,Uid:d9261aa5-af66-4a41-879a-c68f2d901140,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea69234c7db9474f82fa4b99dcee5cc8e842a41cdc815eb8c4c71cd0e8f8011e\""
Jan 17 12:03:34.530562 kubelet[2719]: E0117 12:03:34.529395 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:34.533877 containerd[1541]: time="2025-01-17T12:03:34.532543437Z" level=info msg="CreateContainer within sandbox \"87b658b402aa2c57413fcbbf8136772c75c05ffa8c5a0417bfab02b382583c48\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 17 12:03:34.536548 containerd[1541]: time="2025-01-17T12:03:34.536483503Z" level=info msg="CreateContainer within sandbox \"ea69234c7db9474f82fa4b99dcee5cc8e842a41cdc815eb8c4c71cd0e8f8011e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 17 12:03:34.560035 containerd[1541]: time="2025-01-17T12:03:34.559991575Z" level=info msg="CreateContainer within sandbox \"87b658b402aa2c57413fcbbf8136772c75c05ffa8c5a0417bfab02b382583c48\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ece7ff6543bb27984be0032adad556215547da06c9a7f61517d3d699ac03867\""
Jan 17 12:03:34.561067 containerd[1541]: time="2025-01-17T12:03:34.560953320Z" level=info msg="StartContainer for \"6ece7ff6543bb27984be0032adad556215547da06c9a7f61517d3d699ac03867\""
Jan 17 12:03:34.563914 containerd[1541]: time="2025-01-17T12:03:34.563881279Z" level=info msg="CreateContainer within sandbox \"ea69234c7db9474f82fa4b99dcee5cc8e842a41cdc815eb8c4c71cd0e8f8011e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"023aaf2898772aaf631998a03ae6516d514c7822f8b690912fa86eaaae573f98\""
Jan 17 12:03:34.565432 containerd[1541]: time="2025-01-17T12:03:34.565399200Z" level=info msg="StartContainer for \"023aaf2898772aaf631998a03ae6516d514c7822f8b690912fa86eaaae573f98\""
Jan 17 12:03:34.617081 containerd[1541]: time="2025-01-17T12:03:34.617036068Z" level=info msg="StartContainer for \"6ece7ff6543bb27984be0032adad556215547da06c9a7f61517d3d699ac03867\" returns successfully"
Jan 17 12:03:34.617208 containerd[1541]: time="2025-01-17T12:03:34.617060029Z" level=info msg="StartContainer for \"023aaf2898772aaf631998a03ae6516d514c7822f8b690912fa86eaaae573f98\" returns successfully"
Jan 17 12:03:35.303662 kubelet[2719]: E0117 12:03:35.303629 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:35.307243 kubelet[2719]: E0117 12:03:35.307154 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:35.329687 kubelet[2719]: I0117 12:03:35.329594 2719 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-69t8s" podStartSLOduration=23.329555952 podStartE2EDuration="23.329555952s" podCreationTimestamp="2025-01-17 12:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:03:35.31724655 +0000 UTC m=+38.235670838" watchObservedRunningTime="2025-01-17 12:03:35.329555952 +0000 UTC m=+38.247980200"
Jan 17 12:03:36.308678 kubelet[2719]: E0117 12:03:36.308602 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:36.309290 kubelet[2719]: E0117 12:03:36.308907 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:37.168323 kubelet[2719]: I0117 12:03:37.168258 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 12:03:37.169462 kubelet[2719]: E0117 12:03:37.169324 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:37.181300 kubelet[2719]: I0117 12:03:37.180882 2719 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9vxhz" podStartSLOduration=25.18084855 podStartE2EDuration="25.18084855s" podCreationTimestamp="2025-01-17 12:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:03:35.342684176 +0000 UTC m=+38.261108464" watchObservedRunningTime="2025-01-17 12:03:37.18084855 +0000 UTC m=+40.099272838"
Jan 17 12:03:37.309824 kubelet[2719]: E0117 12:03:37.309783 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:37.310175 kubelet[2719]: E0117 12:03:37.309867 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:37.311814 kubelet[2719]: E0117 12:03:37.310815 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:03:37.777791 systemd[1]: Started sshd@9-10.0.0.47:22-10.0.0.1:56542.service - OpenSSH per-connection server daemon (10.0.0.1:56542).
Jan 17 12:03:37.815742 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 56542 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:03:37.817021 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:03:37.820558 systemd-logind[1522]: New session 10 of user core.
Jan 17 12:03:37.832746 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 17 12:03:37.949683 sshd[4160]: pam_unix(sshd:session): session closed for user core
Jan 17 12:03:37.953790 systemd[1]: sshd@9-10.0.0.47:22-10.0.0.1:56542.service: Deactivated successfully.
Jan 17 12:03:37.955739 systemd[1]: session-10.scope: Deactivated successfully.
Jan 17 12:03:37.955750 systemd-logind[1522]: Session 10 logged out. Waiting for processes to exit.
Jan 17 12:03:37.957219 systemd-logind[1522]: Removed session 10.
Jan 17 12:03:42.966748 systemd[1]: Started sshd@10-10.0.0.47:22-10.0.0.1:36938.service - OpenSSH per-connection server daemon (10.0.0.1:36938).
Jan 17 12:03:42.998951 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 36938 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:03:43.000317 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:03:43.004558 systemd-logind[1522]: New session 11 of user core.
Jan 17 12:03:43.015808 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 17 12:03:43.120883 sshd[4176]: pam_unix(sshd:session): session closed for user core
Jan 17 12:03:43.123669 systemd[1]: sshd@10-10.0.0.47:22-10.0.0.1:36938.service: Deactivated successfully.
Jan 17 12:03:43.126219 systemd-logind[1522]: Session 11 logged out. Waiting for processes to exit.
Jan 17 12:03:43.126896 systemd[1]: session-11.scope: Deactivated successfully.
Jan 17 12:03:43.127882 systemd-logind[1522]: Removed session 11.
Jan 17 12:03:48.135782 systemd[1]: Started sshd@11-10.0.0.47:22-10.0.0.1:36944.service - OpenSSH per-connection server daemon (10.0.0.1:36944).
Jan 17 12:03:48.171154 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 36944 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:03:48.172376 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:03:48.176202 systemd-logind[1522]: New session 12 of user core.
Jan 17 12:03:48.189811 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 17 12:03:48.315059 sshd[4195]: pam_unix(sshd:session): session closed for user core
Jan 17 12:03:48.325013 systemd[1]: Started sshd@12-10.0.0.47:22-10.0.0.1:36952.service - OpenSSH per-connection server daemon (10.0.0.1:36952).
Jan 17 12:03:48.326545 systemd[1]: sshd@11-10.0.0.47:22-10.0.0.1:36944.service: Deactivated successfully.
Jan 17 12:03:48.329537 systemd[1]: session-12.scope: Deactivated successfully.
Jan 17 12:03:48.330418 systemd-logind[1522]: Session 12 logged out. Waiting for processes to exit.
Jan 17 12:03:48.335021 systemd-logind[1522]: Removed session 12.
Jan 17 12:03:48.384261 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 36952 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:03:48.385554 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:03:48.391065 systemd-logind[1522]: New session 13 of user core.
Jan 17 12:03:48.402842 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 17 12:03:48.545737 sshd[4208]: pam_unix(sshd:session): session closed for user core
Jan 17 12:03:48.553965 systemd[1]: Started sshd@13-10.0.0.47:22-10.0.0.1:36968.service - OpenSSH per-connection server daemon (10.0.0.1:36968).
Jan 17 12:03:48.555084 systemd[1]: sshd@12-10.0.0.47:22-10.0.0.1:36952.service: Deactivated successfully.
Jan 17 12:03:48.562097 systemd[1]: session-13.scope: Deactivated successfully.
Jan 17 12:03:48.564232 systemd-logind[1522]: Session 13 logged out. Waiting for processes to exit.
Jan 17 12:03:48.569876 systemd-logind[1522]: Removed session 13.
Jan 17 12:03:48.600113 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 36968 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:03:48.601477 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:03:48.605553 systemd-logind[1522]: New session 14 of user core.
Jan 17 12:03:48.617893 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 17 12:03:48.727377 sshd[4222]: pam_unix(sshd:session): session closed for user core
Jan 17 12:03:48.731411 systemd[1]: sshd@13-10.0.0.47:22-10.0.0.1:36968.service: Deactivated successfully.
Jan 17 12:03:48.731464 systemd-logind[1522]: Session 14 logged out. Waiting for processes to exit.
Jan 17 12:03:48.733391 systemd[1]: session-14.scope: Deactivated successfully.
Jan 17 12:03:48.734064 systemd-logind[1522]: Removed session 14.
Jan 17 12:03:53.741723 systemd[1]: Started sshd@14-10.0.0.47:22-10.0.0.1:58704.service - OpenSSH per-connection server daemon (10.0.0.1:58704).
Jan 17 12:03:53.774161 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 58704 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:03:53.775295 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:03:53.778734 systemd-logind[1522]: New session 15 of user core.
Jan 17 12:03:53.784712 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 12:03:53.892941 sshd[4240]: pam_unix(sshd:session): session closed for user core
Jan 17 12:03:53.902719 systemd[1]: Started sshd@15-10.0.0.47:22-10.0.0.1:58718.service - OpenSSH per-connection server daemon (10.0.0.1:58718).
Jan 17 12:03:53.903101 systemd[1]: sshd@14-10.0.0.47:22-10.0.0.1:58704.service: Deactivated successfully.
Jan 17 12:03:53.905820 systemd-logind[1522]: Session 15 logged out. Waiting for processes to exit.
Jan 17 12:03:53.905986 systemd[1]: session-15.scope: Deactivated successfully.
Jan 17 12:03:53.907231 systemd-logind[1522]: Removed session 15.
Jan 17 12:03:53.934655 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 58718 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:03:53.936397 sshd[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:03:53.941199 systemd-logind[1522]: New session 16 of user core.
Jan 17 12:03:53.946772 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 12:03:54.149833 sshd[4252]: pam_unix(sshd:session): session closed for user core
Jan 17 12:03:54.155726 systemd[1]: Started sshd@16-10.0.0.47:22-10.0.0.1:58728.service - OpenSSH per-connection server daemon (10.0.0.1:58728).
Jan 17 12:03:54.156145 systemd[1]: sshd@15-10.0.0.47:22-10.0.0.1:58718.service: Deactivated successfully.
Jan 17 12:03:54.158946 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 12:03:54.160165 systemd-logind[1522]: Session 16 logged out. Waiting for processes to exit.
Jan 17 12:03:54.161449 systemd-logind[1522]: Removed session 16.
Jan 17 12:03:54.195343 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 58728 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:03:54.196690 sshd[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:03:54.200568 systemd-logind[1522]: New session 17 of user core.
Jan 17 12:03:54.212891 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 12:03:55.423758 sshd[4265]: pam_unix(sshd:session): session closed for user core
Jan 17 12:03:55.443399 systemd[1]: Started sshd@17-10.0.0.47:22-10.0.0.1:58732.service - OpenSSH per-connection server daemon (10.0.0.1:58732).
Jan 17 12:03:55.443931 systemd[1]: sshd@16-10.0.0.47:22-10.0.0.1:58728.service: Deactivated successfully.
Jan 17 12:03:55.449133 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 12:03:55.451679 systemd-logind[1522]: Session 17 logged out. Waiting for processes to exit.
Jan 17 12:03:55.454127 systemd-logind[1522]: Removed session 17.
Jan 17 12:03:55.482486 sshd[4285]: Accepted publickey for core from 10.0.0.1 port 58732 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:03:55.484023 sshd[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:03:55.488524 systemd-logind[1522]: New session 18 of user core.
Jan 17 12:03:55.500851 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 12:03:55.731111 sshd[4285]: pam_unix(sshd:session): session closed for user core
Jan 17 12:03:55.734991 systemd[1]: Started sshd@18-10.0.0.47:22-10.0.0.1:58742.service - OpenSSH per-connection server daemon (10.0.0.1:58742).
Jan 17 12:03:55.736950 systemd[1]: sshd@17-10.0.0.47:22-10.0.0.1:58732.service: Deactivated successfully.
Jan 17 12:03:55.738647 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 12:03:55.741635 systemd-logind[1522]: Session 18 logged out. Waiting for processes to exit.
Jan 17 12:03:55.742856 systemd-logind[1522]: Removed session 18.
Jan 17 12:03:55.771596 sshd[4301]: Accepted publickey for core from 10.0.0.1 port 58742 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:03:55.772809 sshd[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:03:55.776998 systemd-logind[1522]: New session 19 of user core.
Jan 17 12:03:55.782760 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 12:03:55.888561 sshd[4301]: pam_unix(sshd:session): session closed for user core
Jan 17 12:03:55.892453 systemd[1]: sshd@18-10.0.0.47:22-10.0.0.1:58742.service: Deactivated successfully.
Jan 17 12:03:55.895750 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 12:03:55.896611 systemd-logind[1522]: Session 19 logged out. Waiting for processes to exit.
Jan 17 12:03:55.897432 systemd-logind[1522]: Removed session 19.
Jan 17 12:04:00.904720 systemd[1]: Started sshd@19-10.0.0.47:22-10.0.0.1:58750.service - OpenSSH per-connection server daemon (10.0.0.1:58750).
Jan 17 12:04:00.938164 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 58750 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:04:00.938874 sshd[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:04:00.942564 systemd-logind[1522]: New session 20 of user core.
Jan 17 12:04:00.956842 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 12:04:01.062951 sshd[4324]: pam_unix(sshd:session): session closed for user core
Jan 17 12:04:01.066124 systemd[1]: sshd@19-10.0.0.47:22-10.0.0.1:58750.service: Deactivated successfully.
Jan 17 12:04:01.069560 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 12:04:01.070456 systemd-logind[1522]: Session 20 logged out. Waiting for processes to exit.
Jan 17 12:04:01.071476 systemd-logind[1522]: Removed session 20.
Jan 17 12:04:06.074747 systemd[1]: Started sshd@20-10.0.0.47:22-10.0.0.1:39358.service - OpenSSH per-connection server daemon (10.0.0.1:39358).
Jan 17 12:04:06.106762 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 39358 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:04:06.107968 sshd[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:04:06.112155 systemd-logind[1522]: New session 21 of user core.
Jan 17 12:04:06.119854 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 12:04:06.226807 sshd[4339]: pam_unix(sshd:session): session closed for user core
Jan 17 12:04:06.230638 systemd-logind[1522]: Session 21 logged out. Waiting for processes to exit.
Jan 17 12:04:06.231192 systemd[1]: sshd@20-10.0.0.47:22-10.0.0.1:39358.service: Deactivated successfully.
Jan 17 12:04:06.233622 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 12:04:06.234159 systemd-logind[1522]: Removed session 21.
Jan 17 12:04:11.235778 systemd[1]: Started sshd@21-10.0.0.47:22-10.0.0.1:39370.service - OpenSSH per-connection server daemon (10.0.0.1:39370).
Jan 17 12:04:11.269880 sshd[4354]: Accepted publickey for core from 10.0.0.1 port 39370 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:04:11.271226 sshd[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:04:11.278840 systemd-logind[1522]: New session 22 of user core.
Jan 17 12:04:11.288855 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 12:04:11.401220 sshd[4354]: pam_unix(sshd:session): session closed for user core
Jan 17 12:04:11.408706 systemd[1]: Started sshd@22-10.0.0.47:22-10.0.0.1:39378.service - OpenSSH per-connection server daemon (10.0.0.1:39378).
Jan 17 12:04:11.409197 systemd[1]: sshd@21-10.0.0.47:22-10.0.0.1:39370.service: Deactivated successfully.
Jan 17 12:04:11.412574 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 12:04:11.412669 systemd-logind[1522]: Session 22 logged out. Waiting for processes to exit.
Jan 17 12:04:11.414476 systemd-logind[1522]: Removed session 22.
Jan 17 12:04:11.444420 sshd[4366]: Accepted publickey for core from 10.0.0.1 port 39378 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo
Jan 17 12:04:11.445049 sshd[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:04:11.448921 systemd-logind[1522]: New session 23 of user core.
Jan 17 12:04:11.463758 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 12:04:13.747938 containerd[1541]: time="2025-01-17T12:04:13.747891572Z" level=info msg="StopContainer for \"3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027\" with timeout 30 (s)"
Jan 17 12:04:13.752909 containerd[1541]: time="2025-01-17T12:04:13.749906461Z" level=info msg="Stop container \"3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027\" with signal terminated"
Jan 17 12:04:13.766061 containerd[1541]: time="2025-01-17T12:04:13.765922255Z" level=info msg="StopContainer for \"ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287\" with timeout 2 (s)"
Jan 17 12:04:13.766179 containerd[1541]: time="2025-01-17T12:04:13.766156656Z" level=info msg="Stop container \"ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287\" with signal terminated"
Jan 17 12:04:13.769081 containerd[1541]: time="2025-01-17T12:04:13.768747668Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 12:04:13.771660 systemd-networkd[1232]: lxc_health: Link DOWN
Jan 17 12:04:13.771667 systemd-networkd[1232]: lxc_health: Lost carrier
Jan 17 12:04:13.797856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027-rootfs.mount: Deactivated successfully.
Jan 17 12:04:13.811189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287-rootfs.mount: Deactivated successfully.
Jan 17 12:04:13.811849 containerd[1541]: time="2025-01-17T12:04:13.811319624Z" level=info msg="shim disconnected" id=3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027 namespace=k8s.io
Jan 17 12:04:13.811849 containerd[1541]: time="2025-01-17T12:04:13.811372664Z" level=warning msg="cleaning up after shim disconnected" id=3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027 namespace=k8s.io
Jan 17 12:04:13.811849 containerd[1541]: time="2025-01-17T12:04:13.811381424Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:04:13.812417 containerd[1541]: time="2025-01-17T12:04:13.811958027Z" level=info msg="shim disconnected" id=ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287 namespace=k8s.io
Jan 17 12:04:13.812417 containerd[1541]: time="2025-01-17T12:04:13.811999147Z" level=warning msg="cleaning up after shim disconnected" id=ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287 namespace=k8s.io
Jan 17 12:04:13.812417 containerd[1541]: time="2025-01-17T12:04:13.812008147Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:04:13.848724 containerd[1541]: time="2025-01-17T12:04:13.848680596Z" level=info msg="StopContainer for \"ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287\" returns successfully"
Jan 17 12:04:13.849395 containerd[1541]: time="2025-01-17T12:04:13.849366839Z" level=info msg="StopPodSandbox for \"1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3\""
Jan 17 12:04:13.849453 containerd[1541]: time="2025-01-17T12:04:13.849410359Z" level=info msg="Container to stop \"d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:04:13.849453 containerd[1541]: time="2025-01-17T12:04:13.849422839Z" level=info msg="Container to stop \"c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:04:13.849453 containerd[1541]: time="2025-01-17T12:04:13.849432959Z" level=info msg="Container to stop \"216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:04:13.849453 containerd[1541]: time="2025-01-17T12:04:13.849443359Z" level=info msg="Container to stop \"e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:04:13.849453 containerd[1541]: time="2025-01-17T12:04:13.849452800Z" level=info msg="Container to stop \"ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:04:13.850151 containerd[1541]: time="2025-01-17T12:04:13.850123843Z" level=info msg="StopContainer for \"3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027\" returns successfully"
Jan 17 12:04:13.851032 containerd[1541]: time="2025-01-17T12:04:13.850997287Z" level=info msg="StopPodSandbox for \"dbd80810bc5b025dd3520363b025d72488c1c504278c49716dc9445aaea61693\""
Jan 17 12:04:13.851087 containerd[1541]: time="2025-01-17T12:04:13.851035487Z" level=info msg="Container to stop \"3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:04:13.851865 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3-shm.mount: Deactivated successfully.
Jan 17 12:04:13.854445 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dbd80810bc5b025dd3520363b025d72488c1c504278c49716dc9445aaea61693-shm.mount: Deactivated successfully.
Jan 17 12:04:13.880967 containerd[1541]: time="2025-01-17T12:04:13.880909344Z" level=info msg="shim disconnected" id=1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3 namespace=k8s.io
Jan 17 12:04:13.880967 containerd[1541]: time="2025-01-17T12:04:13.880961185Z" level=warning msg="cleaning up after shim disconnected" id=1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3 namespace=k8s.io
Jan 17 12:04:13.880967 containerd[1541]: time="2025-01-17T12:04:13.880969385Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:04:13.881931 containerd[1541]: time="2025-01-17T12:04:13.881886469Z" level=info msg="shim disconnected" id=dbd80810bc5b025dd3520363b025d72488c1c504278c49716dc9445aaea61693 namespace=k8s.io
Jan 17 12:04:13.881967 containerd[1541]: time="2025-01-17T12:04:13.881933869Z" level=warning msg="cleaning up after shim disconnected" id=dbd80810bc5b025dd3520363b025d72488c1c504278c49716dc9445aaea61693 namespace=k8s.io
Jan 17 12:04:13.881967 containerd[1541]: time="2025-01-17T12:04:13.881942309Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:04:13.895101 containerd[1541]: time="2025-01-17T12:04:13.895061770Z" level=info msg="TearDown network for sandbox \"1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3\" successfully"
Jan 17 12:04:13.895101 containerd[1541]: time="2025-01-17T12:04:13.895097410Z" level=info msg="StopPodSandbox for \"1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3\" returns successfully"
Jan 17 12:04:13.899776 containerd[1541]: time="2025-01-17T12:04:13.899728471Z" level=info msg="TearDown network for sandbox \"dbd80810bc5b025dd3520363b025d72488c1c504278c49716dc9445aaea61693\" successfully"
Jan 17 12:04:13.899776 containerd[1541]: time="2025-01-17T12:04:13.899755111Z" level=info msg="StopPodSandbox for \"dbd80810bc5b025dd3520363b025d72488c1c504278c49716dc9445aaea61693\" returns successfully"
Jan 17 12:04:14.051429 kubelet[2719]: I0117 12:04:14.051183 2719 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-host-proc-sys-kernel\") pod \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") "
Jan 17 12:04:14.051429 kubelet[2719]: I0117 12:04:14.051229 2719 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4pfb\" (UniqueName: \"kubernetes.io/projected/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-kube-api-access-h4pfb\") pod \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") "
Jan 17 12:04:14.051429 kubelet[2719]: I0117 12:04:14.051251 2719 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d05e36f6-9712-4118-8564-36ed2a5cf68c-cilium-config-path\") pod \"d05e36f6-9712-4118-8564-36ed2a5cf68c\" (UID: \"d05e36f6-9712-4118-8564-36ed2a5cf68c\") "
Jan 17 12:04:14.051429 kubelet[2719]: I0117 12:04:14.051271 2719 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-cilium-cgroup\") pod \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") "
Jan 17 12:04:14.051429 kubelet[2719]: I0117 12:04:14.051291 2719 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-cni-path\") pod \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") "
Jan 17 12:04:14.051429 kubelet[2719]: I0117 12:04:14.051307 2719 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-bpf-maps\") pod \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") "
Jan 17 12:04:14.061435 kubelet[2719]: I0117 12:04:14.061324 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d05e36f6-9712-4118-8564-36ed2a5cf68c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d05e36f6-9712-4118-8564-36ed2a5cf68c" (UID: "d05e36f6-9712-4118-8564-36ed2a5cf68c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 17 12:04:14.061435 kubelet[2719]: I0117 12:04:14.061394 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "768ee00b-c803-487a-b8bc-67bf7ac9aaf9" (UID: "768ee00b-c803-487a-b8bc-67bf7ac9aaf9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:04:14.061435 kubelet[2719]: I0117 12:04:14.061415 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-cni-path" (OuterVolumeSpecName: "cni-path") pod "768ee00b-c803-487a-b8bc-67bf7ac9aaf9" (UID: "768ee00b-c803-487a-b8bc-67bf7ac9aaf9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:04:14.061666 kubelet[2719]: I0117 12:04:14.061524 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "768ee00b-c803-487a-b8bc-67bf7ac9aaf9" (UID: "768ee00b-c803-487a-b8bc-67bf7ac9aaf9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:04:14.063242 kubelet[2719]: I0117 12:04:14.063203 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "768ee00b-c803-487a-b8bc-67bf7ac9aaf9" (UID: "768ee00b-c803-487a-b8bc-67bf7ac9aaf9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:04:14.067429 kubelet[2719]: I0117 12:04:14.067404 2719 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-hubble-tls\") pod \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") "
Jan 17 12:04:14.067485 kubelet[2719]: I0117 12:04:14.067439 2719 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-cilium-run\") pod \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") "
Jan 17 12:04:14.067485 kubelet[2719]: I0117 12:04:14.067461 2719 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6wlv\" (UniqueName: \"kubernetes.io/projected/d05e36f6-9712-4118-8564-36ed2a5cf68c-kube-api-access-d6wlv\") pod \"d05e36f6-9712-4118-8564-36ed2a5cf68c\" (UID: \"d05e36f6-9712-4118-8564-36ed2a5cf68c\") "
Jan 17 12:04:14.067485 kubelet[2719]: I0117 12:04:14.067478 2719 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-xtables-lock\") pod \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") "
Jan 17 12:04:14.067566 kubelet[2719]: I0117 12:04:14.067511 2719 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-clustermesh-secrets\") pod \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") "
Jan 17 12:04:14.067566 kubelet[2719]: I0117 12:04:14.067532 2719 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-etc-cni-netd\") pod \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") "
Jan 17 12:04:14.067566 kubelet[2719]: I0117 12:04:14.067551 2719 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-cilium-config-path\") pod \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") "
Jan 17 12:04:14.067707 kubelet[2719]: I0117 12:04:14.067570 2719 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-hostproc\") pod \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") "
Jan 17 12:04:14.067707 kubelet[2719]: I0117 12:04:14.067589 2719 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-lib-modules\") pod \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") "
Jan 17 12:04:14.067707 kubelet[2719]: I0117 12:04:14.067605 2719 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-host-proc-sys-net\") pod \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\" (UID: \"768ee00b-c803-487a-b8bc-67bf7ac9aaf9\") "
Jan 17 12:04:14.067707 kubelet[2719]: I0117 12:04:14.067638 2719 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 17 12:04:14.067707 kubelet[2719]: I0117 12:04:14.067649 2719 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d05e36f6-9712-4118-8564-36ed2a5cf68c-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 17 12:04:14.067707 kubelet[2719]: I0117 12:04:14.067660 2719 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 17 12:04:14.067707 kubelet[2719]: I0117 12:04:14.067669 2719 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 17 12:04:14.067866 kubelet[2719]: I0117 12:04:14.067678 2719 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 17 12:04:14.067866 kubelet[2719]: I0117 12:04:14.067701 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "768ee00b-c803-487a-b8bc-67bf7ac9aaf9" (UID: "768ee00b-c803-487a-b8bc-67bf7ac9aaf9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:04:14.067866 kubelet[2719]: I0117 12:04:14.067721 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "768ee00b-c803-487a-b8bc-67bf7ac9aaf9" (UID: "768ee00b-c803-487a-b8bc-67bf7ac9aaf9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:04:14.067866 kubelet[2719]: I0117 12:04:14.067733 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "768ee00b-c803-487a-b8bc-67bf7ac9aaf9" (UID: "768ee00b-c803-487a-b8bc-67bf7ac9aaf9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:04:14.069608 kubelet[2719]: I0117 12:04:14.069528 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "768ee00b-c803-487a-b8bc-67bf7ac9aaf9" (UID: "768ee00b-c803-487a-b8bc-67bf7ac9aaf9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 17 12:04:14.069608 kubelet[2719]: I0117 12:04:14.069570 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-hostproc" (OuterVolumeSpecName: "hostproc") pod "768ee00b-c803-487a-b8bc-67bf7ac9aaf9" (UID: "768ee00b-c803-487a-b8bc-67bf7ac9aaf9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:04:14.069608 kubelet[2719]: I0117 12:04:14.069588 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "768ee00b-c803-487a-b8bc-67bf7ac9aaf9" (UID: "768ee00b-c803-487a-b8bc-67bf7ac9aaf9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:04:14.069608 kubelet[2719]: I0117 12:04:14.069605 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "768ee00b-c803-487a-b8bc-67bf7ac9aaf9" (UID: "768ee00b-c803-487a-b8bc-67bf7ac9aaf9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:04:14.071632 kubelet[2719]: I0117 12:04:14.071581 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d05e36f6-9712-4118-8564-36ed2a5cf68c-kube-api-access-d6wlv" (OuterVolumeSpecName: "kube-api-access-d6wlv") pod "d05e36f6-9712-4118-8564-36ed2a5cf68c" (UID: "d05e36f6-9712-4118-8564-36ed2a5cf68c"). InnerVolumeSpecName "kube-api-access-d6wlv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:04:14.071708 kubelet[2719]: I0117 12:04:14.071651 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "768ee00b-c803-487a-b8bc-67bf7ac9aaf9" (UID: "768ee00b-c803-487a-b8bc-67bf7ac9aaf9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 17 12:04:14.071708 kubelet[2719]: I0117 12:04:14.071690 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "768ee00b-c803-487a-b8bc-67bf7ac9aaf9" (UID: "768ee00b-c803-487a-b8bc-67bf7ac9aaf9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:04:14.071855 kubelet[2719]: I0117 12:04:14.071802 2719 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-kube-api-access-h4pfb" (OuterVolumeSpecName: "kube-api-access-h4pfb") pod "768ee00b-c803-487a-b8bc-67bf7ac9aaf9" (UID: "768ee00b-c803-487a-b8bc-67bf7ac9aaf9"). InnerVolumeSpecName "kube-api-access-h4pfb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:04:14.168355 kubelet[2719]: I0117 12:04:14.168320 2719 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h4pfb\" (UniqueName: \"kubernetes.io/projected/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-kube-api-access-h4pfb\") on node \"localhost\" DevicePath \"\""
Jan 17 12:04:14.168355 kubelet[2719]: I0117 12:04:14.168354 2719 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 17 12:04:14.168355 kubelet[2719]: I0117 12:04:14.168365 2719 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-d6wlv\" (UniqueName: \"kubernetes.io/projected/d05e36f6-9712-4118-8564-36ed2a5cf68c-kube-api-access-d6wlv\") on node \"localhost\" DevicePath \"\""
Jan 17 12:04:14.168480 kubelet[2719]: I0117 12:04:14.168375 2719 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 17 12:04:14.168480 kubelet[2719]: I0117 12:04:14.168385 2719 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 17 12:04:14.168480 kubelet[2719]: I0117 12:04:14.168394 2719 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 17 12:04:14.168480 kubelet[2719]: I0117 12:04:14.168403 2719 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 17 12:04:14.168480 kubelet[2719]: I0117 12:04:14.168412 2719 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 17 12:04:14.168480 kubelet[2719]: I0117 12:04:14.168421 2719 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 17 12:04:14.168480 kubelet[2719]: I0117 12:04:14.168429 2719 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 17 12:04:14.168480 kubelet[2719]: I0117 12:04:14.168438 2719 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/768ee00b-c803-487a-b8bc-67bf7ac9aaf9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 17 12:04:14.435970 kubelet[2719]: I0117 12:04:14.435853 2719 scope.go:117] "RemoveContainer" containerID="3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027"
Jan 17 12:04:14.438268 containerd[1541]: time="2025-01-17T12:04:14.438224919Z" level=info msg="RemoveContainer for \"3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027\""
Jan 17 12:04:14.443940 containerd[1541]: time="2025-01-17T12:04:14.443912226Z" level=info msg="RemoveContainer for
\"3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027\" returns successfully" Jan 17 12:04:14.444134 kubelet[2719]: I0117 12:04:14.444111 2719 scope.go:117] "RemoveContainer" containerID="3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027" Jan 17 12:04:14.444394 containerd[1541]: time="2025-01-17T12:04:14.444315588Z" level=error msg="ContainerStatus for \"3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027\": not found" Jan 17 12:04:14.453113 kubelet[2719]: E0117 12:04:14.453045 2719 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027\": not found" containerID="3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027" Jan 17 12:04:14.456625 kubelet[2719]: I0117 12:04:14.456483 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027"} err="failed to get container status \"3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027\": rpc error: code = NotFound desc = an error occurred when try to find container \"3356881d69e378ac71272d46f3ab25f3b4596750a6fd7119c408253ca47b9027\": not found" Jan 17 12:04:14.456625 kubelet[2719]: I0117 12:04:14.456545 2719 scope.go:117] "RemoveContainer" containerID="ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287" Jan 17 12:04:14.457710 containerd[1541]: time="2025-01-17T12:04:14.457684974Z" level=info msg="RemoveContainer for \"ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287\"" Jan 17 12:04:14.460080 containerd[1541]: time="2025-01-17T12:04:14.460043825Z" level=info msg="RemoveContainer for \"ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287\" returns successfully" Jan 17 12:04:14.460235 kubelet[2719]: I0117 12:04:14.460209 2719 scope.go:117] "RemoveContainer" containerID="c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77" Jan 17 12:04:14.461264 containerd[1541]: time="2025-01-17T12:04:14.461237871Z" level=info msg="RemoveContainer for \"c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77\"" Jan 17 12:04:14.463613 containerd[1541]: time="2025-01-17T12:04:14.463571403Z" level=info msg="RemoveContainer for \"c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77\" returns successfully" Jan 17 12:04:14.463773 kubelet[2719]: I0117 12:04:14.463718 2719 scope.go:117] "RemoveContainer" containerID="d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c" Jan 17 12:04:14.464573 containerd[1541]: time="2025-01-17T12:04:14.464550967Z" level=info msg="RemoveContainer for \"d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c\"" Jan 17 12:04:14.474050 containerd[1541]: time="2025-01-17T12:04:14.474007134Z" level=info msg="RemoveContainer for \"d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c\" returns successfully" Jan 17 12:04:14.474298 kubelet[2719]: I0117 12:04:14.474200 2719 scope.go:117] "RemoveContainer" containerID="e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0" Jan 17 12:04:14.475246 containerd[1541]: time="2025-01-17T12:04:14.475215900Z" level=info msg="RemoveContainer for 
\"e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0\"" Jan 17 12:04:14.477380 containerd[1541]: time="2025-01-17T12:04:14.477345910Z" level=info msg="RemoveContainer for \"e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0\" returns successfully" Jan 17 12:04:14.477511 kubelet[2719]: I0117 12:04:14.477483 2719 scope.go:117] "RemoveContainer" containerID="216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53" Jan 17 12:04:14.478323 containerd[1541]: time="2025-01-17T12:04:14.478300635Z" level=info msg="RemoveContainer for \"216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53\"" Jan 17 12:04:14.480365 containerd[1541]: time="2025-01-17T12:04:14.480333805Z" level=info msg="RemoveContainer for \"216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53\" returns successfully" Jan 17 12:04:14.480496 kubelet[2719]: I0117 12:04:14.480474 2719 scope.go:117] "RemoveContainer" containerID="ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287" Jan 17 12:04:14.480691 containerd[1541]: time="2025-01-17T12:04:14.480649126Z" level=error msg="ContainerStatus for \"ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287\": not found" Jan 17 12:04:14.480904 kubelet[2719]: E0117 12:04:14.480823 2719 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287\": not found" containerID="ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287" Jan 17 12:04:14.481070 kubelet[2719]: I0117 12:04:14.480979 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287"} err="failed to get container status \"ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae11a60d4268e7a1adc1954d954cf1b6b77cd0fc561dd956441678ec2e310287\": not found" Jan 17 12:04:14.481070 kubelet[2719]: I0117 12:04:14.481001 2719 scope.go:117] "RemoveContainer" containerID="c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77" Jan 17 12:04:14.481188 containerd[1541]: time="2025-01-17T12:04:14.481145489Z" level=error msg="ContainerStatus for \"c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77\": not found" Jan 17 12:04:14.481263 kubelet[2719]: E0117 12:04:14.481245 2719 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77\": not found" containerID="c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77" Jan 17 12:04:14.481307 kubelet[2719]: I0117 12:04:14.481273 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77"} err="failed to get container status \"c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"c643a06c41997b0a767aeaadd8eeeab1cc468e36294f61a3fcb92f3f35071e77\": not found" Jan 17 12:04:14.481307 kubelet[2719]: I0117 12:04:14.481283 2719 scope.go:117] "RemoveContainer" containerID="d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c" Jan 17 12:04:14.481477 containerd[1541]: time="2025-01-17T12:04:14.481420490Z" level=error msg="ContainerStatus for \"d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c\": not found" Jan 17 12:04:14.481533 kubelet[2719]: E0117 12:04:14.481517 2719 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c\": not found" containerID="d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c" Jan 17 12:04:14.481569 kubelet[2719]: I0117 12:04:14.481538 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c"} err="failed to get container status \"d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c\": rpc error: code = NotFound desc = an error occurred when try to find container \"d303660c3f06e0db4bae4fc21949f771f72d531aa4fb0e485195b9f19f30312c\": not found" Jan 17 12:04:14.481569 kubelet[2719]: I0117 12:04:14.481546 2719 scope.go:117] "RemoveContainer" containerID="e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0" Jan 17 12:04:14.481796 containerd[1541]: time="2025-01-17T12:04:14.481736932Z" level=error msg="ContainerStatus for \"e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0\": not found" Jan 17 12:04:14.481914 kubelet[2719]: E0117 12:04:14.481887 2719 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0\": not found" containerID="e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0" Jan 17 12:04:14.481914 kubelet[2719]: I0117 12:04:14.481907 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0"} err="failed to get container status \"e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"e6c28fda08becdfde75b8671f4b72bebcc188c36ee49e4c16f18cdec071ca0b0\": not found" Jan 17 12:04:14.481914 kubelet[2719]: I0117 12:04:14.481916 2719 scope.go:117] "RemoveContainer" containerID="216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53" Jan 17 12:04:14.482072 containerd[1541]: time="2025-01-17T12:04:14.482043373Z" level=error msg="ContainerStatus for \"216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53\": not found" Jan 17 12:04:14.482157 kubelet[2719]: E0117 12:04:14.482131 2719 
remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53\": not found" containerID="216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53" Jan 17 12:04:14.482157 kubelet[2719]: I0117 12:04:14.482155 2719 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53"} err="failed to get container status \"216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53\": rpc error: code = NotFound desc = an error occurred when try to find container \"216730ec60ea25fc603580298021116aa24a502be03c3af7010411a409f32e53\": not found" Jan 17 12:04:14.747189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbd80810bc5b025dd3520363b025d72488c1c504278c49716dc9445aaea61693-rootfs.mount: Deactivated successfully. Jan 17 12:04:14.747351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1267dd7a012f14b57daadcb0d4b20e118a32fc4920457fd485e14e985e860de3-rootfs.mount: Deactivated successfully. Jan 17 12:04:14.747438 systemd[1]: var-lib-kubelet-pods-768ee00b\x2dc803\x2d487a\x2db8bc\x2d67bf7ac9aaf9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh4pfb.mount: Deactivated successfully. Jan 17 12:04:14.747554 systemd[1]: var-lib-kubelet-pods-d05e36f6\x2d9712\x2d4118\x2d8564\x2d36ed2a5cf68c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd6wlv.mount: Deactivated successfully. Jan 17 12:04:14.747643 systemd[1]: var-lib-kubelet-pods-768ee00b\x2dc803\x2d487a\x2db8bc\x2d67bf7ac9aaf9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 12:04:14.747725 systemd[1]: var-lib-kubelet-pods-768ee00b\x2dc803\x2d487a\x2db8bc\x2d67bf7ac9aaf9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 12:04:15.179773 kubelet[2719]: I0117 12:04:15.179688 2719 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="768ee00b-c803-487a-b8bc-67bf7ac9aaf9" path="/var/lib/kubelet/pods/768ee00b-c803-487a-b8bc-67bf7ac9aaf9/volumes" Jan 17 12:04:15.180255 kubelet[2719]: I0117 12:04:15.180224 2719 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d05e36f6-9712-4118-8564-36ed2a5cf68c" path="/var/lib/kubelet/pods/d05e36f6-9712-4118-8564-36ed2a5cf68c/volumes" Jan 17 12:04:15.699916 sshd[4366]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:15.712749 systemd[1]: Started sshd@23-10.0.0.47:22-10.0.0.1:38310.service - OpenSSH per-connection server daemon (10.0.0.1:38310). Jan 17 12:04:15.713233 systemd[1]: sshd@22-10.0.0.47:22-10.0.0.1:39378.service: Deactivated successfully. Jan 17 12:04:15.714750 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:04:15.715992 systemd-logind[1522]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:04:15.716964 systemd-logind[1522]: Removed session 23. Jan 17 12:04:15.744098 sshd[4538]: Accepted publickey for core from 10.0.0.1 port 38310 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:04:15.745245 sshd[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:15.749193 systemd-logind[1522]: New session 24 of user core. Jan 17 12:04:15.757723 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 17 12:04:16.920437 sshd[4538]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:16.929907 systemd[1]: Started sshd@24-10.0.0.47:22-10.0.0.1:38322.service - OpenSSH per-connection server daemon (10.0.0.1:38322). Jan 17 12:04:16.934650 systemd[1]: sshd@23-10.0.0.47:22-10.0.0.1:38310.service: Deactivated successfully. Jan 17 12:04:16.936866 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:04:16.947339 systemd-logind[1522]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:04:16.950155 kubelet[2719]: I0117 12:04:16.949962 2719 topology_manager.go:215] "Topology Admit Handler" podUID="ff8ec8eb-4d86-4e13-9882-ba963a3a075a" podNamespace="kube-system" podName="cilium-v6dxs" Jan 17 12:04:16.950155 kubelet[2719]: E0117 12:04:16.950015 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="768ee00b-c803-487a-b8bc-67bf7ac9aaf9" containerName="mount-cgroup" Jan 17 12:04:16.950155 kubelet[2719]: E0117 12:04:16.950025 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="768ee00b-c803-487a-b8bc-67bf7ac9aaf9" containerName="mount-bpf-fs" Jan 17 12:04:16.950155 kubelet[2719]: E0117 12:04:16.950033 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d05e36f6-9712-4118-8564-36ed2a5cf68c" containerName="cilium-operator" Jan 17 12:04:16.950155 kubelet[2719]: E0117 12:04:16.950040 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="768ee00b-c803-487a-b8bc-67bf7ac9aaf9" containerName="apply-sysctl-overwrites" Jan 17 12:04:16.950155 kubelet[2719]: E0117 12:04:16.950046 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="768ee00b-c803-487a-b8bc-67bf7ac9aaf9" containerName="clean-cilium-state" Jan 17 12:04:16.950155 kubelet[2719]: E0117 12:04:16.950053 2719 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="768ee00b-c803-487a-b8bc-67bf7ac9aaf9" containerName="cilium-agent" Jan 17 12:04:16.952713 kubelet[2719]: I0117 12:04:16.952690 2719 memory_manager.go:354] "RemoveStaleState removing state" podUID="768ee00b-c803-487a-b8bc-67bf7ac9aaf9" containerName="cilium-agent" Jan 17 12:04:16.952825 kubelet[2719]: I0117 12:04:16.952811 2719 memory_manager.go:354] "RemoveStaleState removing state" podUID="d05e36f6-9712-4118-8564-36ed2a5cf68c" containerName="cilium-operator" Jan 17 12:04:16.955422 systemd-logind[1522]: Removed session 24. 
Jan 17 12:04:16.982106 sshd[4551]: Accepted publickey for core from 10.0.0.1 port 38322 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:04:16.983362 sshd[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:16.984101 kubelet[2719]: I0117 12:04:16.983945 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stnb6\" (UniqueName: \"kubernetes.io/projected/ff8ec8eb-4d86-4e13-9882-ba963a3a075a-kube-api-access-stnb6\") pod \"cilium-v6dxs\" (UID: \"ff8ec8eb-4d86-4e13-9882-ba963a3a075a\") " pod="kube-system/cilium-v6dxs" Jan 17 12:04:16.984101 kubelet[2719]: I0117 12:04:16.983987 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff8ec8eb-4d86-4e13-9882-ba963a3a075a-hostproc\") pod \"cilium-v6dxs\" (UID: \"ff8ec8eb-4d86-4e13-9882-ba963a3a075a\") " pod="kube-system/cilium-v6dxs" Jan 17 12:04:16.984101 kubelet[2719]: I0117 12:04:16.984008 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff8ec8eb-4d86-4e13-9882-ba963a3a075a-lib-modules\") pod \"cilium-v6dxs\" (UID: \"ff8ec8eb-4d86-4e13-9882-ba963a3a075a\") " pod="kube-system/cilium-v6dxs" Jan 17 12:04:16.984101 kubelet[2719]: I0117 12:04:16.984066 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff8ec8eb-4d86-4e13-9882-ba963a3a075a-cilium-cgroup\") pod \"cilium-v6dxs\" (UID: \"ff8ec8eb-4d86-4e13-9882-ba963a3a075a\") " pod="kube-system/cilium-v6dxs" Jan 17 12:04:16.984245 kubelet[2719]: I0117 12:04:16.984120 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff8ec8eb-4d86-4e13-9882-ba963a3a075a-etc-cni-netd\") pod \"cilium-v6dxs\" (UID: \"ff8ec8eb-4d86-4e13-9882-ba963a3a075a\") " pod="kube-system/cilium-v6dxs" Jan 17 12:04:16.984245 kubelet[2719]: I0117 12:04:16.984142 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff8ec8eb-4d86-4e13-9882-ba963a3a075a-host-proc-sys-net\") pod \"cilium-v6dxs\" (UID: \"ff8ec8eb-4d86-4e13-9882-ba963a3a075a\") " pod="kube-system/cilium-v6dxs" Jan 17 12:04:16.984245 kubelet[2719]: I0117 12:04:16.984166 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff8ec8eb-4d86-4e13-9882-ba963a3a075a-clustermesh-secrets\") pod \"cilium-v6dxs\" (UID: \"ff8ec8eb-4d86-4e13-9882-ba963a3a075a\") " pod="kube-system/cilium-v6dxs" Jan 17 12:04:16.984245 kubelet[2719]: I0117 12:04:16.984185 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff8ec8eb-4d86-4e13-9882-ba963a3a075a-hubble-tls\") pod \"cilium-v6dxs\" (UID: \"ff8ec8eb-4d86-4e13-9882-ba963a3a075a\") " pod="kube-system/cilium-v6dxs" Jan 17 12:04:16.984245 kubelet[2719]: I0117 12:04:16.984204 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff8ec8eb-4d86-4e13-9882-ba963a3a075a-bpf-maps\") pod \"cilium-v6dxs\" (UID: 
\"ff8ec8eb-4d86-4e13-9882-ba963a3a075a\") " pod="kube-system/cilium-v6dxs" Jan 17 12:04:16.984245 kubelet[2719]: I0117 12:04:16.984222 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff8ec8eb-4d86-4e13-9882-ba963a3a075a-cni-path\") pod \"cilium-v6dxs\" (UID: \"ff8ec8eb-4d86-4e13-9882-ba963a3a075a\") " pod="kube-system/cilium-v6dxs" Jan 17 12:04:16.984374 kubelet[2719]: I0117 12:04:16.984243 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff8ec8eb-4d86-4e13-9882-ba963a3a075a-host-proc-sys-kernel\") pod \"cilium-v6dxs\" (UID: \"ff8ec8eb-4d86-4e13-9882-ba963a3a075a\") " pod="kube-system/cilium-v6dxs" Jan 17 12:04:16.984374 kubelet[2719]: I0117 12:04:16.984287 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ff8ec8eb-4d86-4e13-9882-ba963a3a075a-cilium-ipsec-secrets\") pod \"cilium-v6dxs\" (UID: \"ff8ec8eb-4d86-4e13-9882-ba963a3a075a\") " pod="kube-system/cilium-v6dxs" Jan 17 12:04:16.984374 kubelet[2719]: I0117 12:04:16.984327 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff8ec8eb-4d86-4e13-9882-ba963a3a075a-cilium-run\") pod \"cilium-v6dxs\" (UID: \"ff8ec8eb-4d86-4e13-9882-ba963a3a075a\") " pod="kube-system/cilium-v6dxs" Jan 17 12:04:16.984374 kubelet[2719]: I0117 12:04:16.984353 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff8ec8eb-4d86-4e13-9882-ba963a3a075a-xtables-lock\") pod \"cilium-v6dxs\" (UID: \"ff8ec8eb-4d86-4e13-9882-ba963a3a075a\") " pod="kube-system/cilium-v6dxs" Jan 17 12:04:16.984374 kubelet[2719]: I0117 12:04:16.984373 2719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff8ec8eb-4d86-4e13-9882-ba963a3a075a-cilium-config-path\") pod \"cilium-v6dxs\" (UID: \"ff8ec8eb-4d86-4e13-9882-ba963a3a075a\") " pod="kube-system/cilium-v6dxs" Jan 17 12:04:16.987441 systemd-logind[1522]: New session 25 of user core. Jan 17 12:04:16.998734 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:04:17.047929 sshd[4551]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:17.056711 systemd[1]: Started sshd@25-10.0.0.47:22-10.0.0.1:38332.service - OpenSSH per-connection server daemon (10.0.0.1:38332). Jan 17 12:04:17.057091 systemd[1]: sshd@24-10.0.0.47:22-10.0.0.1:38322.service: Deactivated successfully. Jan 17 12:04:17.059866 systemd-logind[1522]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:04:17.060043 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:04:17.061177 systemd-logind[1522]: Removed session 25. Jan 17 12:04:17.090528 sshd[4560]: Accepted publickey for core from 10.0.0.1 port 38332 ssh2: RSA SHA256:xsFjL0Ru499iNfhRyIcjP6wTIWZ5oE8f5Pm6hYv+KHo Jan 17 12:04:17.093008 sshd[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:17.102959 systemd-logind[1522]: New session 26 of user core. Jan 17 12:04:17.111721 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 17 12:04:17.244061 kubelet[2719]: E0117 12:04:17.244028 2719 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 12:04:17.258823 kubelet[2719]: E0117 12:04:17.258788 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:04:17.259231 containerd[1541]: time="2025-01-17T12:04:17.259107126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v6dxs,Uid:ff8ec8eb-4d86-4e13-9882-ba963a3a075a,Namespace:kube-system,Attempt:0,}" Jan 17 12:04:17.277085 containerd[1541]: time="2025-01-17T12:04:17.276947948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:04:17.277085 containerd[1541]: time="2025-01-17T12:04:17.277022348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:04:17.277085 containerd[1541]: time="2025-01-17T12:04:17.277033308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:04:17.277244 containerd[1541]: time="2025-01-17T12:04:17.277187069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:04:17.308650 containerd[1541]: time="2025-01-17T12:04:17.308612169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v6dxs,Uid:ff8ec8eb-4d86-4e13-9882-ba963a3a075a,Namespace:kube-system,Attempt:0,} returns sandbox id \"24c2b2592f9f05d06bb380641f17b5116a3bdcf890aa1ea6f66c58cb2d8850d0\"" Jan 17 12:04:17.309298 kubelet[2719]: E0117 12:04:17.309277 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:04:17.311478 containerd[1541]: time="2025-01-17T12:04:17.311450905Z" level=info msg="CreateContainer within sandbox \"24c2b2592f9f05d06bb380641f17b5116a3bdcf890aa1ea6f66c58cb2d8850d0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:04:17.320422 containerd[1541]: time="2025-01-17T12:04:17.320370516Z" level=info msg="CreateContainer within sandbox \"24c2b2592f9f05d06bb380641f17b5116a3bdcf890aa1ea6f66c58cb2d8850d0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a3b731a98cf6bc197b8bff2c7dae07a1e452076cc0defa928bb5507e9f9d9154\"" Jan 17 12:04:17.320981 containerd[1541]: time="2025-01-17T12:04:17.320952839Z" level=info msg="StartContainer for \"a3b731a98cf6bc197b8bff2c7dae07a1e452076cc0defa928bb5507e9f9d9154\"" Jan 17 12:04:17.363275 containerd[1541]: time="2025-01-17T12:04:17.362011113Z" level=info msg="StartContainer for \"a3b731a98cf6bc197b8bff2c7dae07a1e452076cc0defa928bb5507e9f9d9154\" returns successfully" Jan 17 12:04:17.407166 containerd[1541]: time="2025-01-17T12:04:17.407108851Z" level=info msg="shim disconnected" id=a3b731a98cf6bc197b8bff2c7dae07a1e452076cc0defa928bb5507e9f9d9154 namespace=k8s.io Jan 17 12:04:17.407166 containerd[1541]: time="2025-01-17T12:04:17.407161251Z" level=warning msg="cleaning up after shim disconnected" id=a3b731a98cf6bc197b8bff2c7dae07a1e452076cc0defa928bb5507e9f9d9154 namespace=k8s.io Jan 17 12:04:17.407166 containerd[1541]: 
time="2025-01-17T12:04:17.407169571Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:04:17.438460 kubelet[2719]: E0117 12:04:17.438419 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:04:17.441924 containerd[1541]: time="2025-01-17T12:04:17.441868369Z" level=info msg="CreateContainer within sandbox \"24c2b2592f9f05d06bb380641f17b5116a3bdcf890aa1ea6f66c58cb2d8850d0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:04:17.452938 containerd[1541]: time="2025-01-17T12:04:17.452819111Z" level=info msg="CreateContainer within sandbox \"24c2b2592f9f05d06bb380641f17b5116a3bdcf890aa1ea6f66c58cb2d8850d0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"735c40d4ce448f5c4c39cb3156d7396979142413d5c0cb94ec61ce4ebdf810c2\"" Jan 17 12:04:17.453576 containerd[1541]: time="2025-01-17T12:04:17.453537796Z" level=info msg="StartContainer for \"735c40d4ce448f5c4c39cb3156d7396979142413d5c0cb94ec61ce4ebdf810c2\"" Jan 17 12:04:17.501149 containerd[1541]: time="2025-01-17T12:04:17.500631144Z" level=info msg="StartContainer for \"735c40d4ce448f5c4c39cb3156d7396979142413d5c0cb94ec61ce4ebdf810c2\" returns successfully" Jan 17 12:04:17.528784 containerd[1541]: time="2025-01-17T12:04:17.528716185Z" level=info msg="shim disconnected" id=735c40d4ce448f5c4c39cb3156d7396979142413d5c0cb94ec61ce4ebdf810c2 namespace=k8s.io Jan 17 12:04:17.528784 containerd[1541]: time="2025-01-17T12:04:17.528773425Z" level=warning msg="cleaning up after shim disconnected" id=735c40d4ce448f5c4c39cb3156d7396979142413d5c0cb94ec61ce4ebdf810c2 namespace=k8s.io Jan 17 12:04:17.528784 containerd[1541]: time="2025-01-17T12:04:17.528783505Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:04:18.442102 kubelet[2719]: E0117 12:04:18.441971 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:04:18.446396 containerd[1541]: time="2025-01-17T12:04:18.445365568Z" level=info msg="CreateContainer within sandbox \"24c2b2592f9f05d06bb380641f17b5116a3bdcf890aa1ea6f66c58cb2d8850d0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 12:04:18.469287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3744424301.mount: Deactivated successfully. 
Jan 17 12:04:18.470737 containerd[1541]: time="2025-01-17T12:04:18.470613438Z" level=info msg="CreateContainer within sandbox \"24c2b2592f9f05d06bb380641f17b5116a3bdcf890aa1ea6f66c58cb2d8850d0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"af521ee4a86464f0d3b64b6a100f49ced21a12adbf15cf2417e3a8fa85503f93\"" Jan 17 12:04:18.471214 containerd[1541]: time="2025-01-17T12:04:18.471184882Z" level=info msg="StartContainer for \"af521ee4a86464f0d3b64b6a100f49ced21a12adbf15cf2417e3a8fa85503f93\"" Jan 17 12:04:18.535454 containerd[1541]: time="2025-01-17T12:04:18.535390304Z" level=info msg="StartContainer for \"af521ee4a86464f0d3b64b6a100f49ced21a12adbf15cf2417e3a8fa85503f93\" returns successfully" Jan 17 12:04:18.560155 containerd[1541]: time="2025-01-17T12:04:18.560093972Z" level=info msg="shim disconnected" id=af521ee4a86464f0d3b64b6a100f49ced21a12adbf15cf2417e3a8fa85503f93 namespace=k8s.io Jan 17 12:04:18.564573 containerd[1541]: time="2025-01-17T12:04:18.564531478Z" level=warning msg="cleaning up after shim disconnected" id=af521ee4a86464f0d3b64b6a100f49ced21a12adbf15cf2417e3a8fa85503f93 namespace=k8s.io Jan 17 12:04:18.564573 containerd[1541]: time="2025-01-17T12:04:18.564571678Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:04:19.079358 kubelet[2719]: I0117 12:04:19.079302 2719 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-17T12:04:19Z","lastTransitionTime":"2025-01-17T12:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 17 12:04:19.090782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af521ee4a86464f0d3b64b6a100f49ced21a12adbf15cf2417e3a8fa85503f93-rootfs.mount: Deactivated successfully. Jan 17 12:04:19.178261 kubelet[2719]: E0117 12:04:19.177948 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:04:19.445294 kubelet[2719]: E0117 12:04:19.444867 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:04:19.448625 containerd[1541]: time="2025-01-17T12:04:19.448587456Z" level=info msg="CreateContainer within sandbox \"24c2b2592f9f05d06bb380641f17b5116a3bdcf890aa1ea6f66c58cb2d8850d0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 12:04:19.461766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427718623.mount: Deactivated successfully. 
Jan 17 12:04:19.462456 containerd[1541]: time="2025-01-17T12:04:19.462423582Z" level=info msg="CreateContainer within sandbox \"24c2b2592f9f05d06bb380641f17b5116a3bdcf890aa1ea6f66c58cb2d8850d0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"21e782d4629887f46052f9ef8ff3fbe03d3e0bdb0d21547dc68dcebbfe202aa8\"" Jan 17 12:04:19.463177 containerd[1541]: time="2025-01-17T12:04:19.463149187Z" level=info msg="StartContainer for \"21e782d4629887f46052f9ef8ff3fbe03d3e0bdb0d21547dc68dcebbfe202aa8\"" Jan 17 12:04:19.508580 containerd[1541]: time="2025-01-17T12:04:19.508494348Z" level=info msg="StartContainer for \"21e782d4629887f46052f9ef8ff3fbe03d3e0bdb0d21547dc68dcebbfe202aa8\" returns successfully" Jan 17 12:04:19.525909 containerd[1541]: time="2025-01-17T12:04:19.525856296Z" level=info msg="shim disconnected" id=21e782d4629887f46052f9ef8ff3fbe03d3e0bdb0d21547dc68dcebbfe202aa8 namespace=k8s.io Jan 17 12:04:19.525909 containerd[1541]: time="2025-01-17T12:04:19.525903376Z" level=warning msg="cleaning up after shim disconnected" id=21e782d4629887f46052f9ef8ff3fbe03d3e0bdb0d21547dc68dcebbfe202aa8 namespace=k8s.io Jan 17 12:04:19.525909 containerd[1541]: time="2025-01-17T12:04:19.525911896Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:04:20.090425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21e782d4629887f46052f9ef8ff3fbe03d3e0bdb0d21547dc68dcebbfe202aa8-rootfs.mount: Deactivated successfully. Jan 17 12:04:20.448043 kubelet[2719]: E0117 12:04:20.447944 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:04:20.455690 containerd[1541]: time="2025-01-17T12:04:20.453438319Z" level=info msg="CreateContainer within sandbox \"24c2b2592f9f05d06bb380641f17b5116a3bdcf890aa1ea6f66c58cb2d8850d0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 12:04:20.468997 containerd[1541]: time="2025-01-17T12:04:20.468958099Z" level=info msg="CreateContainer within sandbox \"24c2b2592f9f05d06bb380641f17b5116a3bdcf890aa1ea6f66c58cb2d8850d0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ad958fa63ce80207c49c9a51b13fbc3e2a5f348f9f16f04536832bad3c798737\"" Jan 17 12:04:20.469556 containerd[1541]: time="2025-01-17T12:04:20.469532303Z" level=info msg="StartContainer for \"ad958fa63ce80207c49c9a51b13fbc3e2a5f348f9f16f04536832bad3c798737\"" Jan 17 12:04:20.517113 containerd[1541]: time="2025-01-17T12:04:20.517072689Z" level=info msg="StartContainer for \"ad958fa63ce80207c49c9a51b13fbc3e2a5f348f9f16f04536832bad3c798737\" returns successfully" Jan 17 12:04:20.760527 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 17 12:04:21.178614 kubelet[2719]: E0117 12:04:21.178417 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:04:21.454409 kubelet[2719]: E0117 12:04:21.454307 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:04:21.468911 kubelet[2719]: I0117 12:04:21.468834 2719 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-v6dxs" podStartSLOduration=5.468798569 podStartE2EDuration="5.468798569s" podCreationTimestamp="2025-01-17 12:04:16 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:04:21.468331486 +0000 UTC m=+84.386755774" watchObservedRunningTime="2025-01-17 12:04:21.468798569 +0000 UTC m=+84.387222857" Jan 17 12:04:23.261445 kubelet[2719]: E0117 12:04:23.260836 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:04:23.607773 systemd-networkd[1232]: lxc_health: Link UP Jan 17 12:04:23.615244 systemd-networkd[1232]: lxc_health: Gained carrier Jan 17 12:04:24.178587 kubelet[2719]: E0117 12:04:24.178548 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:04:24.847933 systemd-networkd[1232]: lxc_health: Gained IPv6LL Jan 17 12:04:25.263786 kubelet[2719]: E0117 12:04:25.263743 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:04:25.463728 kubelet[2719]: E0117 12:04:25.463694 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:04:25.582714 systemd[1]: run-containerd-runc-k8s.io-ad958fa63ce80207c49c9a51b13fbc3e2a5f348f9f16f04536832bad3c798737-runc.NHdyPR.mount: Deactivated successfully. Jan 17 12:04:26.465587 kubelet[2719]: E0117 12:04:26.465374 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:04:27.180991 kubelet[2719]: E0117 12:04:27.180860 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:04:29.843664 sshd[4560]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:29.846148 systemd[1]: sshd@25-10.0.0.47:22-10.0.0.1:38332.service: Deactivated successfully. Jan 17 12:04:29.849043 systemd-logind[1522]: Session 26 logged out. Waiting for processes to exit. Jan 17 12:04:29.850542 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 12:04:29.851931 systemd-logind[1522]: Removed session 26.