Jan 30 12:49:57.906222 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 30 12:49:57.906244 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025 Jan 30 12:49:57.906254 kernel: KASLR enabled Jan 30 12:49:57.906260 kernel: efi: EFI v2.7 by EDK II Jan 30 12:49:57.906266 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jan 30 12:49:57.906272 kernel: random: crng init done Jan 30 12:49:57.906279 kernel: ACPI: Early table checksum verification disabled Jan 30 12:49:57.906285 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jan 30 12:49:57.906291 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jan 30 12:49:57.906299 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:49:57.906305 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:49:57.906311 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:49:57.906317 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:49:57.906323 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:49:57.906330 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:49:57.906338 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:49:57.906345 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:49:57.906352 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:49:57.906358 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jan 30 12:49:57.906364 kernel: NUMA: Failed to initialise from firmware Jan 30 12:49:57.906370 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 12:49:57.906377 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Jan 30 12:49:57.906383 kernel: Zone ranges: Jan 30 12:49:57.906390 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 12:49:57.906396 kernel: DMA32 empty Jan 30 12:49:57.906404 kernel: Normal empty Jan 30 12:49:57.906410 kernel: Movable zone start for each node Jan 30 12:49:57.906417 kernel: Early memory node ranges Jan 30 12:49:57.906423 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jan 30 12:49:57.906429 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jan 30 12:49:57.906436 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jan 30 12:49:57.906442 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jan 30 12:49:57.906448 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jan 30 12:49:57.906455 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jan 30 12:49:57.906461 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jan 30 12:49:57.906467 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 12:49:57.906473 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jan 30 12:49:57.906481 kernel: psci: probing for conduit method from ACPI. Jan 30 12:49:57.906488 kernel: psci: PSCIv1.1 detected in firmware. 
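
The entries in this capture have lost their original one-line-per-entry layout and run together. Since every entry begins with a microsecond timestamp, the stream can be re-split mechanically; a minimal Python sketch, assuming the "Jan 30 HH:MM:SS.ffffff" prefix used throughout this log:

    import re

    # Every journal entry in this capture starts with a timestamp such as
    # "Jan 30 12:49:57.906222"; split the stream just before each occurrence.
    ENTRY_START = re.compile(r"(?=Jan 30 \d{2}:\d{2}:\d{2}\.\d{6} )")

    def split_entries(stream):
        """Restore one-entry-per-line structure from a concatenated capture."""
        return [e.strip() for e in ENTRY_START.split(stream) if e.strip()]

    sample = ("Jan 30 12:49:57.906222 kernel: Booting Linux on physical CPU "
              "0x0000000000 [0x413fd0c1] Jan 30 12:49:57.906254 kernel: KASLR enabled")
    for entry in split_entries(sample):
        print(entry)
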
Jan 30 12:49:57.906494 kernel: psci: Using standard PSCI v0.2 function IDs Jan 30 12:49:57.906503 kernel: psci: Trusted OS migration not required Jan 30 12:49:57.906510 kernel: psci: SMC Calling Convention v1.1 Jan 30 12:49:57.906517 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 30 12:49:57.906525 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 30 12:49:57.906532 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 30 12:49:57.906539 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 30 12:49:57.906545 kernel: Detected PIPT I-cache on CPU0 Jan 30 12:49:57.906552 kernel: CPU features: detected: GIC system register CPU interface Jan 30 12:49:57.906559 kernel: CPU features: detected: Hardware dirty bit management Jan 30 12:49:57.906566 kernel: CPU features: detected: Spectre-v4 Jan 30 12:49:57.906572 kernel: CPU features: detected: Spectre-BHB Jan 30 12:49:57.906579 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 30 12:49:57.906586 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 30 12:49:57.906594 kernel: CPU features: detected: ARM erratum 1418040 Jan 30 12:49:57.906601 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 30 12:49:57.906608 kernel: alternatives: applying boot alternatives Jan 30 12:49:57.906629 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 30 12:49:57.906637 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 12:49:57.906644 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 12:49:57.906651 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 12:49:57.906657 kernel: Fallback order for Node 0: 0 Jan 30 12:49:57.906671 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jan 30 12:49:57.906692 kernel: Policy zone: DMA Jan 30 12:49:57.906699 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 12:49:57.906708 kernel: software IO TLB: area num 4. Jan 30 12:49:57.906716 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jan 30 12:49:57.906723 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved) Jan 30 12:49:57.906730 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 30 12:49:57.906736 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 12:49:57.906744 kernel: rcu: RCU event tracing is enabled. Jan 30 12:49:57.906751 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 30 12:49:57.906757 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 12:49:57.906764 kernel: Tracing variant of Tasks RCU enabled. Jan 30 12:49:57.906771 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
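
The kernel command line logged above drives the rest of the boot: root=LABEL=ROOT picks the root filesystem, mount.usr=/dev/mapper/usr points /usr at a device-mapper target, and verity.usrhash pins the dm-verity root hash that is checked later. A sketch of splitting such a line into key/value pairs; this mirrors the /proc/cmdline format rather than Flatcar's own parser, and it ignores quoted values:

    def parse_cmdline(cmdline):
        """Split a kernel command line into parameters; bare flags map to None."""
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else None
        return params

    cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
               "root=LABEL=ROOT console=ttyS0,115200 "
               "verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c")
    params = parse_cmdline(cmdline)
    print(params["root"])            # LABEL=ROOT
    print(params["verity.usrhash"])  # the dm-verity root hash for /usr
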
Jan 30 12:49:57.906778 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 30 12:49:57.906785 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 30 12:49:57.906793 kernel: GICv3: 256 SPIs implemented Jan 30 12:49:57.906800 kernel: GICv3: 0 Extended SPIs implemented Jan 30 12:49:57.906806 kernel: Root IRQ handler: gic_handle_irq Jan 30 12:49:57.906813 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 30 12:49:57.906820 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 30 12:49:57.906827 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 30 12:49:57.906834 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jan 30 12:49:57.906840 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jan 30 12:49:57.906848 kernel: GICv3: using LPI property table @0x00000000400f0000 Jan 30 12:49:57.906855 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jan 30 12:49:57.906862 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 12:49:57.906870 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 12:49:57.906877 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 30 12:49:57.906884 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 30 12:49:57.906890 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 30 12:49:57.906897 kernel: arm-pv: using stolen time PV Jan 30 12:49:57.906904 kernel: Console: colour dummy device 80x25 Jan 30 12:49:57.906911 kernel: ACPI: Core revision 20230628 Jan 30 12:49:57.906918 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 30 12:49:57.906925 kernel: pid_max: default: 32768 minimum: 301 Jan 30 12:49:57.906932 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 12:49:57.906940 kernel: landlock: Up and running. Jan 30 12:49:57.906947 kernel: SELinux: Initializing. Jan 30 12:49:57.906954 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 12:49:57.906961 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 12:49:57.906968 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 12:49:57.906975 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 12:49:57.906982 kernel: rcu: Hierarchical SRCU implementation. Jan 30 12:49:57.906989 kernel: rcu: Max phase no-delay instances is 400. Jan 30 12:49:57.906996 kernel: Platform MSI: ITS@0x8080000 domain created Jan 30 12:49:57.907005 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 30 12:49:57.907012 kernel: Remapping and enabling EFI services. Jan 30 12:49:57.907020 kernel: smp: Bringing up secondary CPUs ... 
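
The timer lines above are internally consistent, and the skipped delay-loop calibration falls straight out of the 25 MHz counter frequency. A worked check; the HZ=1000 tick rate is inferred from lpj=25000 and is not itself printed in the log:

    # Worked check of the timer numbers printed above.
    FREQ_HZ = 25_000_000
    print(1e9 / FREQ_HZ)        # 40.0 ns -> matches "resolution 40ns"
    HZ = 1000                   # assumed; the value that makes lpj consistent
    lpj = FREQ_HZ // HZ         # 25000   -> matches "(lpj=25000)"
    print(lpj * HZ / 500_000)   # 50.0    -> matches "50.00 BogoMIPS"
    print(hex(2**56 - 1))       # 0xffffffffffffff -> the 56-bit counter mask
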
Jan 30 12:49:57.907026 kernel: Detected PIPT I-cache on CPU1 Jan 30 12:49:57.907034 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 30 12:49:57.907041 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jan 30 12:49:57.907048 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 12:49:57.907055 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 30 12:49:57.907062 kernel: Detected PIPT I-cache on CPU2 Jan 30 12:49:57.907070 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 30 12:49:57.907078 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jan 30 12:49:57.907086 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 12:49:57.907097 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 30 12:49:57.907106 kernel: Detected PIPT I-cache on CPU3 Jan 30 12:49:57.907114 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 30 12:49:57.907121 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jan 30 12:49:57.907129 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 12:49:57.907136 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 30 12:49:57.907144 kernel: smp: Brought up 1 node, 4 CPUs Jan 30 12:49:57.907153 kernel: SMP: Total of 4 processors activated. Jan 30 12:49:57.907160 kernel: CPU features: detected: 32-bit EL0 Support Jan 30 12:49:57.907168 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 30 12:49:57.907175 kernel: CPU features: detected: Common not Private translations Jan 30 12:49:57.907183 kernel: CPU features: detected: CRC32 instructions Jan 30 12:49:57.907190 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 30 12:49:57.907197 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 30 12:49:57.907205 kernel: CPU features: detected: LSE atomic instructions Jan 30 12:49:57.907214 kernel: CPU features: detected: Privileged Access Never Jan 30 12:49:57.907221 kernel: CPU features: detected: RAS Extension Support Jan 30 12:49:57.907229 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 30 12:49:57.907236 kernel: CPU: All CPU(s) started at EL1 Jan 30 12:49:57.907244 kernel: alternatives: applying system-wide alternatives Jan 30 12:49:57.907251 kernel: devtmpfs: initialized Jan 30 12:49:57.907259 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 12:49:57.907266 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 30 12:49:57.907273 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 12:49:57.907283 kernel: SMBIOS 3.0.0 present. 
Jan 30 12:49:57.907290 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jan 30 12:49:57.907298 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 12:49:57.907305 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 30 12:49:57.907313 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 30 12:49:57.907320 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 30 12:49:57.907328 kernel: audit: initializing netlink subsys (disabled) Jan 30 12:49:57.907335 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Jan 30 12:49:57.907342 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 12:49:57.907352 kernel: cpuidle: using governor menu Jan 30 12:49:57.907359 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 30 12:49:57.907367 kernel: ASID allocator initialised with 32768 entries Jan 30 12:49:57.907374 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 12:49:57.907382 kernel: Serial: AMBA PL011 UART driver Jan 30 12:49:57.907389 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 30 12:49:57.907397 kernel: Modules: 0 pages in range for non-PLT usage Jan 30 12:49:57.907404 kernel: Modules: 509040 pages in range for PLT usage Jan 30 12:49:57.907411 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 12:49:57.907420 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 12:49:57.907428 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 30 12:49:57.907435 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 30 12:49:57.907443 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 12:49:57.907450 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 12:49:57.907457 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 30 12:49:57.907465 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 30 12:49:57.907472 kernel: ACPI: Added _OSI(Module Device) Jan 30 12:49:57.907479 kernel: ACPI: Added _OSI(Processor Device) Jan 30 12:49:57.907491 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 12:49:57.907498 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 12:49:57.907506 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 12:49:57.907513 kernel: ACPI: Interpreter enabled Jan 30 12:49:57.907521 kernel: ACPI: Using GIC for interrupt routing Jan 30 12:49:57.907528 kernel: ACPI: MCFG table detected, 1 entries Jan 30 12:49:57.907536 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 30 12:49:57.907543 kernel: printk: console [ttyAMA0] enabled Jan 30 12:49:57.907551 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 12:49:57.907714 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 12:49:57.907792 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 30 12:49:57.907862 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 30 12:49:57.907930 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 30 12:49:57.907997 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 30 12:49:57.908007 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 30 
12:49:57.908015 kernel: PCI host bridge to bus 0000:00 Jan 30 12:49:57.908092 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 30 12:49:57.908154 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 30 12:49:57.908215 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 30 12:49:57.908277 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 12:49:57.908361 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 30 12:49:57.908441 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 12:49:57.908513 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jan 30 12:49:57.908584 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jan 30 12:49:57.908680 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 30 12:49:57.908753 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 30 12:49:57.908821 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jan 30 12:49:57.908887 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jan 30 12:49:57.908948 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 30 12:49:57.909008 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 30 12:49:57.909072 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 30 12:49:57.909082 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 30 12:49:57.909089 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 30 12:49:57.909097 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 30 12:49:57.909104 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 30 12:49:57.909112 kernel: iommu: Default domain type: Translated Jan 30 12:49:57.909119 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 30 12:49:57.909127 kernel: efivars: Registered efivars operations Jan 30 12:49:57.909136 kernel: vgaarb: loaded Jan 30 12:49:57.909144 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 30 12:49:57.909151 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 12:49:57.909159 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 12:49:57.909166 kernel: pnp: PnP ACPI init Jan 30 12:49:57.909244 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 30 12:49:57.909255 kernel: pnp: PnP ACPI: found 1 devices Jan 30 12:49:57.909262 kernel: NET: Registered PF_INET protocol family Jan 30 12:49:57.909272 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 12:49:57.909280 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 12:49:57.909287 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 12:49:57.909295 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 12:49:57.909302 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 30 12:49:57.909310 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 12:49:57.909317 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 12:49:57.909325 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 12:49:57.909332 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 12:49:57.909342 kernel: PCI: CLS 0 bytes, default 64 Jan 30 12:49:57.909349 kernel: kvm [1]: HYP mode 
not available Jan 30 12:49:57.909356 kernel: Initialise system trusted keyrings Jan 30 12:49:57.909364 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 12:49:57.909371 kernel: Key type asymmetric registered Jan 30 12:49:57.909378 kernel: Asymmetric key parser 'x509' registered Jan 30 12:49:57.909386 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 30 12:49:57.909393 kernel: io scheduler mq-deadline registered Jan 30 12:49:57.909400 kernel: io scheduler kyber registered Jan 30 12:49:57.909409 kernel: io scheduler bfq registered Jan 30 12:49:57.909416 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 30 12:49:57.909423 kernel: ACPI: button: Power Button [PWRB] Jan 30 12:49:57.909431 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 30 12:49:57.909497 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jan 30 12:49:57.909507 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 12:49:57.909514 kernel: thunder_xcv, ver 1.0 Jan 30 12:49:57.909521 kernel: thunder_bgx, ver 1.0 Jan 30 12:49:57.909528 kernel: nicpf, ver 1.0 Jan 30 12:49:57.909537 kernel: nicvf, ver 1.0 Jan 30 12:49:57.909620 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 30 12:49:57.909702 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T12:49:57 UTC (1738241397) Jan 30 12:49:57.909713 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 12:49:57.909720 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 30 12:49:57.909728 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 30 12:49:57.909735 kernel: watchdog: Hard watchdog permanently disabled Jan 30 12:49:57.909742 kernel: NET: Registered PF_INET6 protocol family Jan 30 12:49:57.909752 kernel: Segment Routing with IPv6 Jan 30 12:49:57.909759 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 12:49:57.909767 kernel: NET: Registered PF_PACKET protocol family Jan 30 12:49:57.909774 kernel: Key type dns_resolver registered Jan 30 12:49:57.909781 kernel: registered taskstats version 1 Jan 30 12:49:57.909788 kernel: Loading compiled-in X.509 certificates Jan 30 12:49:57.909796 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415' Jan 30 12:49:57.909803 kernel: Key type .fscrypt registered Jan 30 12:49:57.909810 kernel: Key type fscrypt-provisioning registered Jan 30 12:49:57.909819 kernel: ima: No TPM chip found, activating TPM-bypass! 
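
The rtc-efi entry above prints the same instant twice, as a wall-clock string and as a Unix epoch; the two agree, and they also match the journal's own "Jan 30 12:49:57" prefixes on these entries:

    from datetime import datetime, timezone

    # Convert the logged epoch back to UTC wall-clock time.
    print(datetime.fromtimestamp(1738241397, tz=timezone.utc).isoformat())
    # -> 2025-01-30T12:49:57+00:00, matching "2025-01-30T12:49:57 UTC"
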
Jan 30 12:49:57.909826 kernel: ima: Allocated hash algorithm: sha1 Jan 30 12:49:57.909834 kernel: ima: No architecture policies found Jan 30 12:49:57.909841 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 30 12:49:57.909848 kernel: clk: Disabling unused clocks Jan 30 12:49:57.909855 kernel: Freeing unused kernel memory: 39360K Jan 30 12:49:57.909862 kernel: Run /init as init process Jan 30 12:49:57.909869 kernel: with arguments: Jan 30 12:49:57.909877 kernel: /init Jan 30 12:49:57.909885 kernel: with environment: Jan 30 12:49:57.909892 kernel: HOME=/ Jan 30 12:49:57.909899 kernel: TERM=linux Jan 30 12:49:57.909907 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 12:49:57.909916 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 12:49:57.909926 systemd[1]: Detected virtualization kvm. Jan 30 12:49:57.909934 systemd[1]: Detected architecture arm64. Jan 30 12:49:57.909942 systemd[1]: Running in initrd. Jan 30 12:49:57.909951 systemd[1]: No hostname configured, using default hostname. Jan 30 12:49:57.909959 systemd[1]: Hostname set to <localhost>. Jan 30 12:49:57.909967 systemd[1]: Initializing machine ID from VM UUID. Jan 30 12:49:57.909975 systemd[1]: Queued start job for default target initrd.target. Jan 30 12:49:57.909983 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 12:49:57.909991 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 12:49:57.909999 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 12:49:57.910007 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 12:49:57.910017 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 12:49:57.910025 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 12:49:57.910035 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 12:49:57.910043 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 12:49:57.910051 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 12:49:57.910059 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 12:49:57.910073 systemd[1]: Reached target paths.target - Path Units. Jan 30 12:49:57.910082 systemd[1]: Reached target slices.target - Slice Units. Jan 30 12:49:57.910090 systemd[1]: Reached target swap.target - Swaps. Jan 30 12:49:57.910097 systemd[1]: Reached target timers.target - Timer Units. Jan 30 12:49:57.910106 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 12:49:57.910114 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 12:49:57.910122 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 12:49:57.910130 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 30 12:49:57.910138 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 12:49:57.910148 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 12:49:57.910156 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 12:49:57.910164 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 12:49:57.910175 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 12:49:57.910184 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 12:49:57.910192 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 12:49:57.910200 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 12:49:57.910208 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 12:49:57.910216 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 12:49:57.910230 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:49:57.910240 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 12:49:57.910248 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 12:49:57.910256 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 12:49:57.910283 systemd-journald[238]: Collecting audit messages is disabled. Jan 30 12:49:57.910304 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 12:49:57.910313 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:49:57.910321 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:49:57.910331 systemd-journald[238]: Journal started Jan 30 12:49:57.910353 systemd-journald[238]: Runtime Journal (/run/log/journal/13a0846240624418b2b01f421c6a5bfa) is 5.9M, max 47.3M, 41.4M free. Jan 30 12:49:57.897833 systemd-modules-load[239]: Inserted module 'overlay' Jan 30 12:49:57.914202 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 12:49:57.915658 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 12:49:57.918016 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 12:49:57.918534 systemd-modules-load[239]: Inserted module 'br_netfilter' Jan 30 12:49:57.919275 kernel: Bridge firewalling registered Jan 30 12:49:57.927091 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 12:49:57.929037 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 12:49:57.930196 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 12:49:57.933953 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:49:57.935539 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 12:49:57.938064 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 12:49:57.939806 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:49:57.943077 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 12:49:57.951908 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 12:49:57.953416 dracut-cmdline[268]: dracut-dracut-053 Jan 30 12:49:57.954339 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 12:49:57.955937 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 30 12:49:57.996570 systemd-resolved[286]: Positive Trust Anchors: Jan 30 12:49:57.996590 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 12:49:57.996687 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 12:49:58.003051 systemd-resolved[286]: Defaulting to hostname 'linux'. Jan 30 12:49:58.007890 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 12:49:58.008831 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 12:49:58.036643 kernel: SCSI subsystem initialized Jan 30 12:49:58.040638 kernel: Loading iSCSI transport class v2.0-870. Jan 30 12:49:58.048644 kernel: iscsi: registered transport (tcp) Jan 30 12:49:58.061650 kernel: iscsi: registered transport (qla4xxx) Jan 30 12:49:58.061684 kernel: QLogic iSCSI HBA Driver Jan 30 12:49:58.109396 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 12:49:58.122816 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 12:49:58.140106 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 12:49:58.140170 kernel: device-mapper: uevent: version 1.0.3 Jan 30 12:49:58.140182 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 12:49:58.188642 kernel: raid6: neonx8 gen() 15628 MB/s Jan 30 12:49:58.205633 kernel: raid6: neonx4 gen() 15483 MB/s Jan 30 12:49:58.222632 kernel: raid6: neonx2 gen() 13168 MB/s Jan 30 12:49:58.239640 kernel: raid6: neonx1 gen() 10419 MB/s Jan 30 12:49:58.256640 kernel: raid6: int64x8 gen() 6143 MB/s Jan 30 12:49:58.273629 kernel: raid6: int64x4 gen() 7209 MB/s Jan 30 12:49:58.290636 kernel: raid6: int64x2 gen() 6065 MB/s Jan 30 12:49:58.307652 kernel: raid6: int64x1 gen() 4964 MB/s Jan 30 12:49:58.307713 kernel: raid6: using algorithm neonx8 gen() 15628 MB/s Jan 30 12:49:58.324643 kernel: raid6: .... 
xor() 11903 MB/s, rmw enabled Jan 30 12:49:58.324665 kernel: raid6: using neon recovery algorithm Jan 30 12:49:58.331772 kernel: xor: measuring software checksum speed Jan 30 12:49:58.331790 kernel: 8regs : 19793 MB/sec Jan 30 12:49:58.332924 kernel: 32regs : 17788 MB/sec Jan 30 12:49:58.332938 kernel: arm64_neon : 27034 MB/sec Jan 30 12:49:58.332948 kernel: xor: using function: arm64_neon (27034 MB/sec) Jan 30 12:49:58.387654 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 12:49:58.400679 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 12:49:58.411808 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 12:49:58.424242 systemd-udevd[459]: Using default interface naming scheme 'v255'. Jan 30 12:49:58.427639 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 12:49:58.433803 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 12:49:58.447521 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Jan 30 12:49:58.476520 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 12:49:58.488818 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 12:49:58.529924 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 12:49:58.537807 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 12:49:58.553730 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 12:49:58.555289 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 12:49:58.557861 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 12:49:58.559046 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 12:49:58.568834 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 12:49:58.576349 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jan 30 12:49:58.583074 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 30 12:49:58.583178 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 12:49:58.583191 kernel: GPT:9289727 != 19775487 Jan 30 12:49:58.583200 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 12:49:58.583210 kernel: GPT:9289727 != 19775487 Jan 30 12:49:58.583219 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 12:49:58.583228 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:49:58.580159 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 12:49:58.589933 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 12:49:58.590044 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:49:58.593990 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:49:58.594988 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 12:49:58.595232 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:49:58.597751 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
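
The GPT complaints above ("GPT:9289727 != 19775487") are the classic signature of a disk image written for a smaller disk and then grown: the backup GPT header sits where the original image ended instead of in the last sector, hence "Use GNU Parted to correct GPT errors." Arithmetic from the numbers in the log:

    SECTOR = 512
    disk_sectors = 19_775_488         # from the virtio-blk line above
    backup_lba_found = 9_289_727      # where the backup GPT header actually is
    backup_lba_expected = disk_sectors - 1   # GPT keeps it in the last sector

    print(backup_lba_expected)                # 19775487, as the kernel says
    image_bytes = (backup_lba_found + 1) * SECTOR
    print(image_bytes / 2**30)                # ~4.43 GiB: the original image size,
                                              # versus the 9.43 GiB disk it now sits on
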
Jan 30 12:49:58.605156 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (520) Jan 30 12:49:58.608650 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (511) Jan 30 12:49:58.610902 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:49:58.625083 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 12:49:58.626173 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:49:58.631853 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 12:49:58.638273 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 12:49:58.639213 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 12:49:58.644604 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 12:49:58.657790 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 12:49:58.659391 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:49:58.665343 disk-uuid[548]: Primary Header is updated. Jan 30 12:49:58.665343 disk-uuid[548]: Secondary Entries is updated. Jan 30 12:49:58.665343 disk-uuid[548]: Secondary Header is updated. Jan 30 12:49:58.672514 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:49:58.679183 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:49:59.687652 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:49:59.688122 disk-uuid[549]: The operation has completed successfully. Jan 30 12:49:59.714049 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 12:49:59.714181 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 12:49:59.734867 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 12:49:59.738580 sh[573]: Success Jan 30 12:49:59.755699 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 30 12:49:59.792158 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 12:49:59.801238 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 12:49:59.804712 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 12:49:59.814166 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 30 12:49:59.814210 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 30 12:49:59.814221 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 12:49:59.815155 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 12:49:59.815735 kernel: BTRFS info (device dm-0): using free space tree Jan 30 12:49:59.819520 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 12:49:59.820742 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 12:49:59.830804 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
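
verity-setup.service above builds /dev/mapper/usr from the USR-A partition and the verity.usrhash value on the kernel command line. The log does not show the exact invocation; as a rough illustration only, a read-only dm-verity mapping is opened with the veritysetup tool from cryptsetup. The hash-area offset below is a placeholder, and reusing the data partition as the hash device only approximates Flatcar's appended-hash layout:

    import subprocess

    # Hypothetical reconstruction, not the command Flatcar's initrd actually runs.
    root_hash = "05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c"
    part = "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"
    subprocess.run(
        ["veritysetup", "open", part, "usr", part, root_hash,
         "--hash-offset", str(1_048_576)],   # placeholder offset for the hash tree
        check=True,
    )
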
Jan 30 12:49:59.833052 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 12:49:59.842971 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:49:59.843015 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 12:49:59.843032 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:49:59.846879 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:49:59.853533 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 12:49:59.854819 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:49:59.862390 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 12:49:59.870798 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 12:49:59.935521 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 12:49:59.949041 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 12:49:59.973577 ignition[670]: Ignition 2.19.0 Jan 30 12:49:59.973588 ignition[670]: Stage: fetch-offline Jan 30 12:49:59.973814 systemd-networkd[765]: lo: Link UP Jan 30 12:49:59.973636 ignition[670]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:49:59.973818 systemd-networkd[765]: lo: Gained carrier Jan 30 12:49:59.973653 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:49:59.974557 systemd-networkd[765]: Enumeration completed Jan 30 12:49:59.973811 ignition[670]: parsed url from cmdline: "" Jan 30 12:49:59.975029 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:49:59.973814 ignition[670]: no config URL provided Jan 30 12:49:59.975032 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 12:49:59.973819 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 12:49:59.975104 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 12:49:59.973827 ignition[670]: no config at "/usr/lib/ignition/user.ign" Jan 30 12:49:59.976043 systemd[1]: Reached target network.target - Network. Jan 30 12:49:59.973849 ignition[670]: op(1): [started] loading QEMU firmware config module Jan 30 12:49:59.978930 systemd-networkd[765]: eth0: Link UP Jan 30 12:49:59.973854 ignition[670]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 30 12:49:59.978933 systemd-networkd[765]: eth0: Gained carrier Jan 30 12:49:59.978940 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 30 12:49:59.993022 ignition[670]: op(1): [finished] loading QEMU firmware config module Jan 30 12:50:00.001678 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 12:50:00.044794 ignition[670]: parsing config with SHA512: eca720da00ccf08585995669d25d9b53092aafcadad7606b5a9eaf11e98092ada4106056f5375b876f4e652de8dc6fff8506cac5262d25d7a2cc748b52f2288a Jan 30 12:50:00.051285 unknown[670]: fetched base config from "system" Jan 30 12:50:00.051310 unknown[670]: fetched user config from "qemu" Jan 30 12:50:00.052205 ignition[670]: fetch-offline: fetch-offline passed Jan 30 12:50:00.052297 ignition[670]: Ignition finished successfully Jan 30 12:50:00.054131 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 12:50:00.056595 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 12:50:00.063769 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 12:50:00.075090 ignition[771]: Ignition 2.19.0 Jan 30 12:50:00.075100 ignition[771]: Stage: kargs Jan 30 12:50:00.075275 ignition[771]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:50:00.075284 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:50:00.076235 ignition[771]: kargs: kargs passed Jan 30 12:50:00.076278 ignition[771]: Ignition finished successfully Jan 30 12:50:00.079341 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 12:50:00.087759 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 12:50:00.097340 ignition[779]: Ignition 2.19.0 Jan 30 12:50:00.097350 ignition[779]: Stage: disks Jan 30 12:50:00.097521 ignition[779]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:50:00.097531 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:50:00.098493 ignition[779]: disks: disks passed Jan 30 12:50:00.098540 ignition[779]: Ignition finished successfully Jan 30 12:50:00.101074 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 12:50:00.102041 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 12:50:00.103305 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 12:50:00.104823 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 12:50:00.106271 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 12:50:00.107591 systemd[1]: Reached target basic.target - Basic System. Jan 30 12:50:00.121803 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 12:50:00.131567 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 12:50:00.135682 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 12:50:00.142756 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 12:50:00.184566 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 12:50:00.185740 kernel: EXT4-fs (vda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none. Jan 30 12:50:00.185662 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 12:50:00.194720 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 12:50:00.196687 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
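
Ignition above logs a SHA512 digest of the config it merged and is about to parse. Reproducing such a digest offline is a one-liner with hashlib; the local file name here is hypothetical:

    import hashlib

    # Digest a rendered Ignition config the way the journal reports it; compare
    # the hex digest against the "parsing config with SHA512: ..." entry.
    with open("config.ign", "rb") as f:       # hypothetical local copy
        print(hashlib.sha512(f.read()).hexdigest())
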
Jan 30 12:50:00.197467 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 12:50:00.197506 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 12:50:00.197529 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 12:50:00.205685 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798) Jan 30 12:50:00.207281 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 12:50:00.210476 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:50:00.210500 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 12:50:00.210510 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:50:00.210519 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:50:00.215770 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 12:50:00.217371 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 12:50:00.259333 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 12:50:00.263578 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory Jan 30 12:50:00.266621 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 12:50:00.269726 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 12:50:00.344123 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 12:50:00.365741 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 12:50:00.367176 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 12:50:00.372629 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:50:00.388368 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 12:50:00.390273 ignition[912]: INFO : Ignition 2.19.0 Jan 30 12:50:00.390273 ignition[912]: INFO : Stage: mount Jan 30 12:50:00.391508 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 12:50:00.391508 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:50:00.391508 ignition[912]: INFO : mount: mount passed Jan 30 12:50:00.391508 ignition[912]: INFO : Ignition finished successfully Jan 30 12:50:00.392787 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 12:50:00.404747 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 12:50:00.813770 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 12:50:00.822815 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 12:50:00.828954 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (925) Jan 30 12:50:00.828986 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:50:00.828997 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 12:50:00.830106 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:50:00.832646 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:50:00.833168 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 12:50:00.849339 ignition[942]: INFO : Ignition 2.19.0 Jan 30 12:50:00.849339 ignition[942]: INFO : Stage: files Jan 30 12:50:00.851064 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 12:50:00.851064 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:50:00.851064 ignition[942]: DEBUG : files: compiled without relabeling support, skipping Jan 30 12:50:00.854687 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 12:50:00.854687 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 12:50:00.854687 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 12:50:00.854687 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 12:50:00.854687 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 12:50:00.853885 unknown[942]: wrote ssh authorized keys file for user: core Jan 30 12:50:00.862321 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 12:50:00.862321 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 12:50:00.862321 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 12:50:00.862321 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 30 12:50:01.094902 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 12:50:01.298628 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 12:50:01.298628 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 12:50:01.302531 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 30 12:50:01.617842 systemd-networkd[765]: eth0: Gained IPv6LL Jan 30 12:50:01.625057 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 30 12:50:01.681412 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 12:50:01.683475 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 30 12:50:01.683475 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 12:50:01.683475 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 12:50:01.683475 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 12:50:01.683475 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 12:50:01.683475 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Jan 30 12:50:01.683475 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 12:50:01.683475 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 12:50:01.683475 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 12:50:01.683475 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 12:50:01.683475 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 12:50:01.683475 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 12:50:01.683475 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 12:50:01.683475 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 30 12:50:01.930945 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 30 12:50:02.123766 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 12:50:02.123766 ignition[942]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 30 12:50:02.127313 ignition[942]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 12:50:02.127313 ignition[942]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 12:50:02.127313 ignition[942]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 30 12:50:02.127313 ignition[942]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 30 12:50:02.127313 ignition[942]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 12:50:02.127313 ignition[942]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 12:50:02.127313 ignition[942]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 30 12:50:02.127313 ignition[942]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 30 12:50:02.127313 ignition[942]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 12:50:02.127313 ignition[942]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 12:50:02.127313 ignition[942]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 30 12:50:02.127313 
ignition[942]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 12:50:02.169638 ignition[942]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 12:50:02.174747 ignition[942]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 12:50:02.176395 ignition[942]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 12:50:02.176395 ignition[942]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jan 30 12:50:02.176395 ignition[942]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 12:50:02.176395 ignition[942]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 12:50:02.176395 ignition[942]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 12:50:02.176395 ignition[942]: INFO : files: files passed Jan 30 12:50:02.176395 ignition[942]: INFO : Ignition finished successfully Jan 30 12:50:02.176657 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 12:50:02.190803 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 12:50:02.193805 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 12:50:02.200178 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 12:50:02.200271 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 12:50:02.206895 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 12:50:02.211058 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 12:50:02.211058 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 12:50:02.214404 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 12:50:02.215681 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 12:50:02.217476 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 12:50:02.232867 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 12:50:02.255740 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 12:50:02.255868 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 12:50:02.257894 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 12:50:02.259577 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 12:50:02.261223 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 12:50:02.262092 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 12:50:02.278553 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 12:50:02.280824 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 12:50:02.293203 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
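
The preset operations in the files stage above boil down to symlink management: enabling a unit links it from a target's .wants/ directory, and disabling removes that link. A rough sketch of the effect; the multi-user.target and /sysroot/etc paths are assumptions based on the usual systemd layout, not values from the log:

    from pathlib import Path

    # Approximate effect of "setting preset to enabled/disabled" under /sysroot.
    wants = Path("/sysroot/etc/systemd/system/multi-user.target.wants")

    def enable(unit, unit_path):
        wants.mkdir(parents=True, exist_ok=True)
        (wants / unit).symlink_to(unit_path)      # enablement symlink

    def disable(unit):
        (wants / unit).unlink(missing_ok=True)    # remove enablement symlink

    enable("prepare-helm.service", "/etc/systemd/system/prepare-helm.service")
    disable("coreos-metadata.service")
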
Jan 30 12:50:02.295810 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 12:50:02.297204 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 12:50:02.301071 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 12:50:02.301201 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 12:50:02.304818 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 12:50:02.307723 systemd[1]: Stopped target basic.target - Basic System. Jan 30 12:50:02.309563 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 12:50:02.311535 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 12:50:02.313781 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 12:50:02.316132 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 12:50:02.318935 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 12:50:02.320570 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 12:50:02.322957 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 12:50:02.328437 systemd[1]: Stopped target swap.target - Swaps. Jan 30 12:50:02.329439 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 12:50:02.329577 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 12:50:02.332108 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 12:50:02.334175 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 12:50:02.336268 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 12:50:02.339676 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 12:50:02.340968 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 12:50:02.341117 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 12:50:02.344018 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 12:50:02.344135 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 12:50:02.346235 systemd[1]: Stopped target paths.target - Path Units. Jan 30 12:50:02.347980 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 12:50:02.348149 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 12:50:02.350145 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 12:50:02.351710 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 12:50:02.353468 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 12:50:02.353563 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 12:50:02.355805 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 12:50:02.355888 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 12:50:02.357524 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 12:50:02.357657 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 12:50:02.359550 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 12:50:02.359674 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 12:50:02.375857 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 30 12:50:02.376806 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 12:50:02.376953 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 12:50:02.379691 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 12:50:02.380557 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 12:50:02.380715 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 12:50:02.383017 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 12:50:02.388107 ignition[997]: INFO : Ignition 2.19.0 Jan 30 12:50:02.388107 ignition[997]: INFO : Stage: umount Jan 30 12:50:02.388107 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 12:50:02.388107 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:50:02.388107 ignition[997]: INFO : umount: umount passed Jan 30 12:50:02.388107 ignition[997]: INFO : Ignition finished successfully Jan 30 12:50:02.383134 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 12:50:02.389084 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 12:50:02.389176 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 12:50:02.390899 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 12:50:02.390991 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 12:50:02.393304 systemd[1]: Stopped target network.target - Network. Jan 30 12:50:02.394796 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 12:50:02.394869 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 12:50:02.396406 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 12:50:02.396452 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 12:50:02.398566 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 12:50:02.398628 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 12:50:02.400794 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 12:50:02.400841 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 12:50:02.402867 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 12:50:02.405038 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 12:50:02.407835 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 12:50:02.416212 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 12:50:02.416328 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 12:50:02.417665 systemd-networkd[765]: eth0: DHCPv6 lease lost Jan 30 12:50:02.418880 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 12:50:02.418967 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 12:50:02.420735 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 12:50:02.422797 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 12:50:02.424324 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 12:50:02.424383 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 12:50:02.433789 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 12:50:02.435321 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
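In the umount stage above, Ignition reports "no configs at /usr/lib/ignition/base.d" and no platform directory for qemu: these are the optional vendor/base config locations merged ahead of the user config, and both are simply absent on this image, which is normal. The layout it probes is roughly (sketch; exact contents vary per image):

    /usr/lib/ignition/
        base.d/                   # vendor snippets, merged before the user config (empty here)
        base.platform.d/qemu/     # per-platform snippets for the detected platform (absent here)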
Jan 30 12:50:02.435400 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 12:50:02.437496 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 12:50:02.437541 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:50:02.439321 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 12:50:02.439365 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 12:50:02.441462 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 12:50:02.452522 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 12:50:02.452677 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 12:50:02.460379 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 12:50:02.460558 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 12:50:02.465083 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 12:50:02.465134 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 12:50:02.468182 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 12:50:02.468218 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 12:50:02.470219 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 12:50:02.470273 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 12:50:02.473479 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 12:50:02.473537 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 12:50:02.476536 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 12:50:02.476591 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:50:02.485789 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 12:50:02.486912 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 12:50:02.486979 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 12:50:02.489286 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 12:50:02.489330 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:50:02.491682 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 12:50:02.492655 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 12:50:02.493863 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 12:50:02.493960 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 12:50:02.496407 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 12:50:02.497711 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 12:50:02.497775 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 12:50:02.500371 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 12:50:02.511253 systemd[1]: Switching root. Jan 30 12:50:02.530312 systemd-journald[238]: Journal stopped Jan 30 12:50:03.298767 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Jan 30 12:50:03.298829 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 12:50:03.298847 kernel: SELinux: policy capability open_perms=1 Jan 30 12:50:03.298858 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 12:50:03.298868 kernel: SELinux: policy capability always_check_network=0 Jan 30 12:50:03.298881 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 12:50:03.298892 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 12:50:03.298902 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 12:50:03.298912 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 12:50:03.298923 kernel: audit: type=1403 audit(1738241402.736:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 12:50:03.298935 systemd[1]: Successfully loaded SELinux policy in 33.185ms. Jan 30 12:50:03.298954 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.096ms. Jan 30 12:50:03.298972 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 12:50:03.298985 systemd[1]: Detected virtualization kvm. Jan 30 12:50:03.298999 systemd[1]: Detected architecture arm64. Jan 30 12:50:03.299021 systemd[1]: Detected first boot. Jan 30 12:50:03.299038 systemd[1]: Initializing machine ID from VM UUID. Jan 30 12:50:03.299048 zram_generator::config[1058]: No configuration found. Jan 30 12:50:03.299060 systemd[1]: Populated /etc with preset unit settings. Jan 30 12:50:03.299077 systemd[1]: Queued start job for default target multi-user.target. Jan 30 12:50:03.299088 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 12:50:03.299100 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 12:50:03.299113 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 12:50:03.299125 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 12:50:03.299136 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 12:50:03.299146 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 12:50:03.299157 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 12:50:03.299168 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 12:50:03.299178 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 12:50:03.299188 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 12:50:03.299209 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 12:50:03.299221 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 12:50:03.299232 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 12:50:03.299243 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 12:50:03.299253 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 30 12:50:03.299264 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 30 12:50:03.299275 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 12:50:03.299291 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 12:50:03.299301 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 12:50:03.299312 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 12:50:03.299327 systemd[1]: Reached target slices.target - Slice Units. Jan 30 12:50:03.299337 systemd[1]: Reached target swap.target - Swaps. Jan 30 12:50:03.299348 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 12:50:03.299359 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 12:50:03.299370 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 12:50:03.299381 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 12:50:03.299392 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 12:50:03.299403 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 12:50:03.299416 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 12:50:03.299427 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 12:50:03.299438 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 12:50:03.299449 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 12:50:03.299459 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 12:50:03.299470 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 12:50:03.299481 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 12:50:03.299492 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 12:50:03.299502 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 12:50:03.299515 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:50:03.299527 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 12:50:03.299537 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 12:50:03.299549 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:50:03.299560 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 12:50:03.299571 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:50:03.299582 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 12:50:03.299593 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 12:50:03.299606 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 12:50:03.299640 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 30 12:50:03.299654 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 30 12:50:03.299666 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jan 30 12:50:03.299677 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 12:50:03.299689 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 12:50:03.299699 kernel: loop: module loaded Jan 30 12:50:03.299709 kernel: fuse: init (API version 7.39) Jan 30 12:50:03.299720 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 12:50:03.299754 systemd-journald[1133]: Collecting audit messages is disabled. Jan 30 12:50:03.299782 systemd-journald[1133]: Journal started Jan 30 12:50:03.299805 systemd-journald[1133]: Runtime Journal (/run/log/journal/13a0846240624418b2b01f421c6a5bfa) is 5.9M, max 47.3M, 41.4M free. Jan 30 12:50:03.303467 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 12:50:03.305657 kernel: ACPI: bus type drm_connector registered Jan 30 12:50:03.308653 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 12:50:03.309598 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 12:50:03.310855 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 12:50:03.311810 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 12:50:03.312735 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 12:50:03.314025 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 12:50:03.315149 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 12:50:03.316282 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 12:50:03.317631 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 12:50:03.317815 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 12:50:03.319475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:50:03.319683 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:50:03.320958 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 12:50:03.321125 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 12:50:03.322196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:50:03.322365 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 12:50:03.323957 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 12:50:03.324178 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 12:50:03.325313 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:50:03.325539 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:50:03.326815 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 12:50:03.327978 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 12:50:03.329481 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 12:50:03.330867 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 12:50:03.343186 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 12:50:03.355717 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 12:50:03.357826 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
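The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop pairs above all come from a single template unit: systemd instantiates modprobe@.service once per module name, so module loading surfaces as ordinary unit activity, and because the template is a plain oneshot it reports "Deactivated successfully" the moment modprobe exits. Paraphrased (trimmed; see the shipped unit for the exact text):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %I   # leading "-": a missing module is not treated as failure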
Jan 30 12:50:03.358800 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 12:50:03.362806 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 12:50:03.366457 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 12:50:03.367841 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 12:50:03.369181 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 12:50:03.370256 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 12:50:03.373483 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:50:03.374765 systemd-journald[1133]: Time spent on flushing to /var/log/journal/13a0846240624418b2b01f421c6a5bfa is 24.596ms for 846 entries. Jan 30 12:50:03.374765 systemd-journald[1133]: System Journal (/var/log/journal/13a0846240624418b2b01f421c6a5bfa) is 8.0M, max 195.6M, 187.6M free. Jan 30 12:50:03.405107 systemd-journald[1133]: Received client request to flush runtime journal. Jan 30 12:50:03.376225 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 12:50:03.381760 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 12:50:03.384463 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 12:50:03.385943 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 12:50:03.388356 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 12:50:03.390928 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 12:50:03.405857 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 12:50:03.409367 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 12:50:03.411109 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:50:03.416552 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jan 30 12:50:03.416575 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jan 30 12:50:03.419810 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 12:50:03.420697 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 12:50:03.433826 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 12:50:03.456819 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 12:50:03.468819 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 12:50:03.482838 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Jan 30 12:50:03.482860 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Jan 30 12:50:03.487055 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 12:50:03.846396 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 12:50:03.857916 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 30 12:50:03.880058 systemd-udevd[1220]: Using default interface naming scheme 'v255'. Jan 30 12:50:03.899854 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 12:50:03.912922 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 12:50:03.931921 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 12:50:03.940969 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 30 12:50:03.977645 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1238) Jan 30 12:50:04.003187 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 12:50:04.004944 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 12:50:04.026716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:50:04.040201 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 12:50:04.047524 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 12:50:04.081692 lvm[1256]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 12:50:04.091384 systemd-networkd[1228]: lo: Link UP Jan 30 12:50:04.091392 systemd-networkd[1228]: lo: Gained carrier Jan 30 12:50:04.092235 systemd-networkd[1228]: Enumeration completed Jan 30 12:50:04.092367 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:50:04.093104 systemd-networkd[1228]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:50:04.093114 systemd-networkd[1228]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 12:50:04.093737 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 12:50:04.094112 systemd-networkd[1228]: eth0: Link UP Jan 30 12:50:04.094123 systemd-networkd[1228]: eth0: Gained carrier Jan 30 12:50:04.094135 systemd-networkd[1228]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:50:04.113874 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 12:50:04.115580 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 12:50:04.117487 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 12:50:04.119877 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 12:50:04.120681 systemd-networkd[1228]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 12:50:04.128099 lvm[1266]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 12:50:04.160404 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 12:50:04.161850 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 12:50:04.162999 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 12:50:04.163055 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 12:50:04.163920 systemd[1]: Reached target machines.target - Containers. 
Jan 30 12:50:04.166070 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 12:50:04.179244 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 12:50:04.181481 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 12:50:04.182631 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:50:04.183799 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 12:50:04.186861 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 12:50:04.189905 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 12:50:04.194465 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 12:50:04.203379 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 12:50:04.210659 kernel: loop0: detected capacity change from 0 to 114328 Jan 30 12:50:04.221658 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 12:50:04.253672 kernel: loop1: detected capacity change from 0 to 194096 Jan 30 12:50:04.323678 kernel: loop2: detected capacity change from 0 to 114432 Jan 30 12:50:04.327282 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 12:50:04.328183 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 12:50:04.368659 kernel: loop3: detected capacity change from 0 to 114328 Jan 30 12:50:04.376632 kernel: loop4: detected capacity change from 0 to 194096 Jan 30 12:50:04.386637 kernel: loop5: detected capacity change from 0 to 114432 Jan 30 12:50:04.393016 (sd-merge)[1288]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 12:50:04.393421 (sd-merge)[1288]: Merged extensions into '/usr'. Jan 30 12:50:04.397166 systemd[1]: Reloading requested from client PID 1274 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 12:50:04.397184 systemd[1]: Reloading... Jan 30 12:50:04.448659 zram_generator::config[1322]: No configuration found. Jan 30 12:50:04.530695 ldconfig[1270]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 12:50:04.543936 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:50:04.588479 systemd[1]: Reloading finished in 190 ms. Jan 30 12:50:04.602720 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 12:50:04.603945 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 12:50:04.614909 systemd[1]: Starting ensure-sysext.service... Jan 30 12:50:04.616761 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 12:50:04.621370 systemd[1]: Reloading requested from client PID 1358 ('systemctl') (unit ensure-sysext.service)... Jan 30 12:50:04.621383 systemd[1]: Reloading... Jan 30 12:50:04.634836 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 30 12:50:04.635107 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 12:50:04.635752 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 12:50:04.635982 systemd-tmpfiles[1359]: ACLs are not supported, ignoring. Jan 30 12:50:04.636041 systemd-tmpfiles[1359]: ACLs are not supported, ignoring. Jan 30 12:50:04.639731 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 12:50:04.639745 systemd-tmpfiles[1359]: Skipping /boot Jan 30 12:50:04.649751 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 12:50:04.649767 systemd-tmpfiles[1359]: Skipping /boot Jan 30 12:50:04.661645 zram_generator::config[1384]: No configuration found. Jan 30 12:50:04.771516 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:50:04.815763 systemd[1]: Reloading finished in 194 ms. Jan 30 12:50:04.831781 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 12:50:04.848239 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 12:50:04.851264 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 12:50:04.853546 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 12:50:04.858810 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 12:50:04.865006 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 12:50:04.869830 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:50:04.871986 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:50:04.878066 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:50:04.883774 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 12:50:04.885752 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:50:04.886891 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:50:04.887134 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:50:04.896421 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:50:04.896636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 12:50:04.898867 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:50:04.899086 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:50:04.901754 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 12:50:04.902182 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 12:50:04.905644 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 12:50:04.909835 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 30 12:50:04.914723 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:50:04.923092 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:50:04.926014 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:50:04.930794 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 12:50:04.931728 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:50:04.941184 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 12:50:04.943149 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:50:04.947390 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:50:04.948988 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:50:04.949137 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 12:50:04.950875 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:50:04.951034 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:50:04.958330 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:50:04.963902 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:50:04.968854 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 12:50:04.970661 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:50:04.976911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 12:50:04.978103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:50:04.979095 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 12:50:04.980489 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:50:04.980678 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:50:04.981891 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 12:50:04.982025 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 12:50:04.985814 systemd[1]: Finished ensure-sysext.service. Jan 30 12:50:04.991060 augenrules[1489]: No rules Jan 30 12:50:04.990449 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:50:04.990632 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:50:04.992801 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 12:50:04.993000 systemd-resolved[1434]: Positive Trust Anchors: Jan 30 12:50:04.994513 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:50:04.994874 systemd-resolved[1434]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 12:50:04.994912 systemd-resolved[1434]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 12:50:04.995420 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 12:50:04.999257 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 12:50:05.002697 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 12:50:05.002898 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 12:50:05.006473 systemd-resolved[1434]: Defaulting to hostname 'linux'. Jan 30 12:50:05.010839 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 12:50:05.011719 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 12:50:05.011965 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 12:50:05.013123 systemd[1]: Reached target network.target - Network. Jan 30 12:50:05.013947 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 12:50:05.060506 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 12:50:05.530298 systemd-resolved[1434]: Clock change detected. Flushing caches. Jan 30 12:50:05.530336 systemd-timesyncd[1502]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 12:50:05.530383 systemd-timesyncd[1502]: Initial clock synchronization to Thu 2025-01-30 12:50:05.530237 UTC. Jan 30 12:50:05.530932 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 12:50:05.531804 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 12:50:05.532845 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 12:50:05.533785 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 12:50:05.534731 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 12:50:05.534765 systemd[1]: Reached target paths.target - Path Units. Jan 30 12:50:05.535449 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 12:50:05.536354 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 12:50:05.537314 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 12:50:05.538337 systemd[1]: Reached target timers.target - Timer Units. Jan 30 12:50:05.539867 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
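systemd-timesyncd reached the (likely DHCP-supplied) server 10.0.0.1 and stepped the clock forward roughly half a second — entries jump from ~12:50:05.06 to 12:50:05.53, and systemd-resolved flushes its caches in response to the clock change. Pinning the same server statically would look like (illustrative):

    # /etc/systemd/timesyncd.conf
    [Time]
    NTP=10.0.0.1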
Jan 30 12:50:05.542172 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 12:50:05.544156 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 12:50:05.550651 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 12:50:05.551510 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 12:50:05.552343 systemd[1]: Reached target basic.target - Basic System. Jan 30 12:50:05.553249 systemd[1]: System is tainted: cgroupsv1 Jan 30 12:50:05.553298 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 12:50:05.553317 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 12:50:05.554626 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 12:50:05.556598 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 12:50:05.558364 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 12:50:05.562725 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 12:50:05.563638 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 12:50:05.564778 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 12:50:05.570688 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 12:50:05.572472 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 12:50:05.573772 jq[1508]: false Jan 30 12:50:05.578372 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 12:50:05.582013 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 12:50:05.587750 extend-filesystems[1509]: Found loop3 Jan 30 12:50:05.594690 extend-filesystems[1509]: Found loop4 Jan 30 12:50:05.594690 extend-filesystems[1509]: Found loop5 Jan 30 12:50:05.594690 extend-filesystems[1509]: Found vda Jan 30 12:50:05.594690 extend-filesystems[1509]: Found vda1 Jan 30 12:50:05.594690 extend-filesystems[1509]: Found vda2 Jan 30 12:50:05.594690 extend-filesystems[1509]: Found vda3 Jan 30 12:50:05.594690 extend-filesystems[1509]: Found usr Jan 30 12:50:05.594690 extend-filesystems[1509]: Found vda4 Jan 30 12:50:05.594690 extend-filesystems[1509]: Found vda6 Jan 30 12:50:05.594690 extend-filesystems[1509]: Found vda7 Jan 30 12:50:05.594690 extend-filesystems[1509]: Found vda9 Jan 30 12:50:05.594690 extend-filesystems[1509]: Checking size of /dev/vda9 Jan 30 12:50:05.640173 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1239) Jan 30 12:50:05.588399 dbus-daemon[1507]: [system] SELinux support is enabled Jan 30 12:50:05.590167 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 12:50:05.644139 extend-filesystems[1509]: Resized partition /dev/vda9 Jan 30 12:50:05.592948 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 12:50:05.598572 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 12:50:05.600360 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 12:50:05.606032 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 30 12:50:05.648912 jq[1530]: true Jan 30 12:50:05.606273 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 12:50:05.606532 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 12:50:05.606794 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 12:50:05.608519 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 12:50:05.608758 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 12:50:05.620823 (ntainerd)[1542]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 12:50:05.622361 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 12:50:05.653970 extend-filesystems[1535]: resize2fs 1.47.1 (20-May-2024) Jan 30 12:50:05.622388 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 12:50:05.638120 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 12:50:05.658595 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 12:50:05.638144 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 12:50:05.661983 jq[1544]: true Jan 30 12:50:05.672422 update_engine[1526]: I20250130 12:50:05.671839 1526 main.cc:92] Flatcar Update Engine starting Jan 30 12:50:05.675638 tar[1534]: linux-arm64/helm Jan 30 12:50:05.680328 update_engine[1526]: I20250130 12:50:05.680203 1526 update_check_scheduler.cc:74] Next update check in 11m28s Jan 30 12:50:05.680157 systemd[1]: Started update-engine.service - Update Engine. Jan 30 12:50:05.682941 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 12:50:05.686816 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 12:50:05.713590 systemd-logind[1519]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 12:50:05.716601 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 12:50:05.720147 systemd-logind[1519]: New seat seat0. Jan 30 12:50:05.720864 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 12:50:05.738660 extend-filesystems[1535]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 12:50:05.738660 extend-filesystems[1535]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 12:50:05.738660 extend-filesystems[1535]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 12:50:05.743898 extend-filesystems[1509]: Resized filesystem in /dev/vda9 Jan 30 12:50:05.742726 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 12:50:05.742991 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 12:50:05.767297 bash[1573]: Updated "/home/core/.ssh/authorized_keys" Jan 30 12:50:05.769048 locksmithd[1554]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 12:50:05.771072 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
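The extend-filesystems walk above enumerates the block devices, then grows the root filesystem online: the kernel line records the ext4 resize from 553472 to 1864699 4 KiB blocks, i.e. roughly 2.1 GiB to 7.1 GiB (the partition itself was already grown to fill the disk earlier in first boot). A manual equivalent of what the service does — a sketch, since the unit's exact commands are not shown in the log:

    resize2fs /dev/vda9    # online-grow the mounted ext4 root to fill its partition
    df -h /                # confirm: / now reports ~7.1G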
Jan 30 12:50:05.774750 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 12:50:05.886918 sshd_keygen[1546]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 12:50:05.909446 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 12:50:05.920719 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 12:50:05.924576 containerd[1542]: time="2025-01-30T12:50:05.923649215Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 12:50:05.927140 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 12:50:05.927386 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 12:50:05.939935 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 12:50:05.952120 containerd[1542]: time="2025-01-30T12:50:05.952073615Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:50:05.953334 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 12:50:05.953536 containerd[1542]: time="2025-01-30T12:50:05.953501735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:50:05.953599 containerd[1542]: time="2025-01-30T12:50:05.953539095Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 12:50:05.953675 containerd[1542]: time="2025-01-30T12:50:05.953630815Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 12:50:05.953834 containerd[1542]: time="2025-01-30T12:50:05.953807255Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 12:50:05.953873 containerd[1542]: time="2025-01-30T12:50:05.953843695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 12:50:05.953924 containerd[1542]: time="2025-01-30T12:50:05.953903175Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:50:05.953951 containerd[1542]: time="2025-01-30T12:50:05.953924695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:50:05.955020 containerd[1542]: time="2025-01-30T12:50:05.954142775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:50:05.955020 containerd[1542]: time="2025-01-30T12:50:05.954166815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 12:50:05.955020 containerd[1542]: time="2025-01-30T12:50:05.954181375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:50:05.955020 containerd[1542]: time="2025-01-30T12:50:05.954191215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 12:50:05.955020 containerd[1542]: time="2025-01-30T12:50:05.954264855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:50:05.955020 containerd[1542]: time="2025-01-30T12:50:05.954454175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:50:05.955020 containerd[1542]: time="2025-01-30T12:50:05.954634735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:50:05.955020 containerd[1542]: time="2025-01-30T12:50:05.954652655Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 12:50:05.955020 containerd[1542]: time="2025-01-30T12:50:05.954733015Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 12:50:05.955020 containerd[1542]: time="2025-01-30T12:50:05.954772375Z" level=info msg="metadata content store policy set" policy=shared Jan 30 12:50:05.964054 containerd[1542]: time="2025-01-30T12:50:05.963999775Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 12:50:05.964054 containerd[1542]: time="2025-01-30T12:50:05.964067095Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 12:50:05.964177 containerd[1542]: time="2025-01-30T12:50:05.964085455Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 12:50:05.964177 containerd[1542]: time="2025-01-30T12:50:05.964103815Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 12:50:05.964177 containerd[1542]: time="2025-01-30T12:50:05.964123375Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 12:50:05.964329 containerd[1542]: time="2025-01-30T12:50:05.964306095Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 12:50:05.965440 containerd[1542]: time="2025-01-30T12:50:05.965395415Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 12:50:05.965738 containerd[1542]: time="2025-01-30T12:50:05.965710535Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 12:50:05.965765 containerd[1542]: time="2025-01-30T12:50:05.965741495Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 12:50:05.965765 containerd[1542]: time="2025-01-30T12:50:05.965758455Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 12:50:05.965800 containerd[1542]: time="2025-01-30T12:50:05.965774255Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 30 12:50:05.965800 containerd[1542]: time="2025-01-30T12:50:05.965790095Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 12:50:05.965846 containerd[1542]: time="2025-01-30T12:50:05.965803415Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 12:50:05.965846 containerd[1542]: time="2025-01-30T12:50:05.965818895Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 12:50:05.965846 containerd[1542]: time="2025-01-30T12:50:05.965834735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 12:50:05.965895 containerd[1542]: time="2025-01-30T12:50:05.965849615Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 12:50:05.965895 containerd[1542]: time="2025-01-30T12:50:05.965862815Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 12:50:05.965895 containerd[1542]: time="2025-01-30T12:50:05.965875615Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 12:50:05.965947 containerd[1542]: time="2025-01-30T12:50:05.965898695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 12:50:05.965947 containerd[1542]: time="2025-01-30T12:50:05.965913095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 12:50:05.965947 containerd[1542]: time="2025-01-30T12:50:05.965926415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 12:50:05.965947 containerd[1542]: time="2025-01-30T12:50:05.965939215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 12:50:05.966019 containerd[1542]: time="2025-01-30T12:50:05.965953135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 12:50:05.966019 containerd[1542]: time="2025-01-30T12:50:05.965967095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 12:50:05.966019 containerd[1542]: time="2025-01-30T12:50:05.965979615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 12:50:05.966019 containerd[1542]: time="2025-01-30T12:50:05.965992855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 12:50:05.966019 containerd[1542]: time="2025-01-30T12:50:05.966006295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 12:50:05.966105 containerd[1542]: time="2025-01-30T12:50:05.966022255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 12:50:05.966105 containerd[1542]: time="2025-01-30T12:50:05.966035455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 12:50:05.966105 containerd[1542]: time="2025-01-30T12:50:05.966047575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 30 12:50:05.966105 containerd[1542]: time="2025-01-30T12:50:05.966062495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 12:50:05.966105 containerd[1542]: time="2025-01-30T12:50:05.966082615Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 12:50:05.966105 containerd[1542]: time="2025-01-30T12:50:05.966105095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 12:50:05.966210 containerd[1542]: time="2025-01-30T12:50:05.966119895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 12:50:05.966210 containerd[1542]: time="2025-01-30T12:50:05.966132175Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 12:50:05.966342 containerd[1542]: time="2025-01-30T12:50:05.966320015Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 12:50:05.966368 containerd[1542]: time="2025-01-30T12:50:05.966343735Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 12:50:05.966368 containerd[1542]: time="2025-01-30T12:50:05.966356215Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 12:50:05.966413 containerd[1542]: time="2025-01-30T12:50:05.966368935Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 12:50:05.966413 containerd[1542]: time="2025-01-30T12:50:05.966380375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 12:50:05.966413 containerd[1542]: time="2025-01-30T12:50:05.966394695Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 12:50:05.966413 containerd[1542]: time="2025-01-30T12:50:05.966405255Z" level=info msg="NRI interface is disabled by configuration." Jan 30 12:50:05.966484 containerd[1542]: time="2025-01-30T12:50:05.966416815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 12:50:05.966810 containerd[1542]: time="2025-01-30T12:50:05.966743335Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 12:50:05.966932 containerd[1542]: time="2025-01-30T12:50:05.966810775Z" level=info msg="Connect containerd service" Jan 30 12:50:05.966932 containerd[1542]: time="2025-01-30T12:50:05.966849255Z" level=info msg="using legacy CRI server" Jan 30 12:50:05.966932 containerd[1542]: time="2025-01-30T12:50:05.966856175Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 12:50:05.967005 containerd[1542]: time="2025-01-30T12:50:05.966970855Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 12:50:05.967913 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 30 12:50:05.970099 containerd[1542]: time="2025-01-30T12:50:05.970055895Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 12:50:05.971036 containerd[1542]: time="2025-01-30T12:50:05.970516935Z" level=info msg="Start subscribing containerd event"
Jan 30 12:50:05.971036 containerd[1542]: time="2025-01-30T12:50:05.970602655Z" level=info msg="Start recovering state"
Jan 30 12:50:05.971036 containerd[1542]: time="2025-01-30T12:50:05.970680815Z" level=info msg="Start event monitor"
Jan 30 12:50:05.971036 containerd[1542]: time="2025-01-30T12:50:05.970693535Z" level=info msg="Start snapshots syncer"
Jan 30 12:50:05.971036 containerd[1542]: time="2025-01-30T12:50:05.970704255Z" level=info msg="Start cni network conf syncer for default"
Jan 30 12:50:05.971036 containerd[1542]: time="2025-01-30T12:50:05.970712975Z" level=info msg="Start streaming server"
Jan 30 12:50:05.971036 containerd[1542]: time="2025-01-30T12:50:05.970882295Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 12:50:05.971036 containerd[1542]: time="2025-01-30T12:50:05.970926775Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 12:50:05.971036 containerd[1542]: time="2025-01-30T12:50:05.970982295Z" level=info msg="containerd successfully booted in 0.048418s"
Jan 30 12:50:05.978904 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 30 12:50:05.980122 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 12:50:05.981231 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 12:50:06.073957 tar[1534]: linux-arm64/LICENSE
Jan 30 12:50:06.073957 tar[1534]: linux-arm64/README.md
Jan 30 12:50:06.088292 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 30 12:50:06.374689 systemd-networkd[1228]: eth0: Gained IPv6LL
Jan 30 12:50:06.380259 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 12:50:06.382085 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 12:50:06.395852 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 30 12:50:06.398685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 12:50:06.400782 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 12:50:06.417902 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 30 12:50:06.418281 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 30 12:50:06.420881 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 30 12:50:06.434042 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 12:50:06.914080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 12:50:06.916086 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 30 12:50:06.918135 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 12:50:06.920658 systemd[1]: Startup finished in 5.592s (kernel) + 3.752s (userspace) = 9.345s.
Jan 30 12:50:07.559283 kubelet[1643]: E0130 12:50:07.558784 1643 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 12:50:07.561288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 12:50:07.561457 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 12:50:11.475794 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 30 12:50:11.488860 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:57440.service - OpenSSH per-connection server daemon (10.0.0.1:57440).
Jan 30 12:50:11.560704 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 57440 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:50:11.564425 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:50:11.571635 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 30 12:50:11.577820 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 30 12:50:11.579354 systemd-logind[1519]: New session 1 of user core.
Jan 30 12:50:11.587511 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 30 12:50:11.590839 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 30 12:50:11.597616 (systemd)[1663]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 30 12:50:11.681024 systemd[1663]: Queued start job for default target default.target.
Jan 30 12:50:11.681664 systemd[1663]: Created slice app.slice - User Application Slice.
Jan 30 12:50:11.681698 systemd[1663]: Reached target paths.target - Paths.
Jan 30 12:50:11.681710 systemd[1663]: Reached target timers.target - Timers.
Jan 30 12:50:11.696677 systemd[1663]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 30 12:50:11.702701 systemd[1663]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 30 12:50:11.702765 systemd[1663]: Reached target sockets.target - Sockets.
Jan 30 12:50:11.702778 systemd[1663]: Reached target basic.target - Basic System.
Jan 30 12:50:11.702816 systemd[1663]: Reached target default.target - Main User Target.
Jan 30 12:50:11.702849 systemd[1663]: Startup finished in 99ms.
Jan 30 12:50:11.703174 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 30 12:50:11.705127 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 30 12:50:11.760802 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:57448.service - OpenSSH per-connection server daemon (10.0.0.1:57448).
Jan 30 12:50:11.790226 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 57448 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:50:11.791532 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:50:11.795996 systemd-logind[1519]: New session 2 of user core.
Jan 30 12:50:11.806968 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 30 12:50:11.859484 sshd[1675]: pam_unix(sshd:session): session closed for user core
Jan 30 12:50:11.867855 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:57454.service - OpenSSH per-connection server daemon (10.0.0.1:57454).
Jan 30 12:50:11.868253 systemd[1]: sshd@1-10.0.0.34:22-10.0.0.1:57448.service: Deactivated successfully.
Jan 30 12:50:11.870192 systemd-logind[1519]: Session 2 logged out. Waiting for processes to exit.
Jan 30 12:50:11.870791 systemd[1]: session-2.scope: Deactivated successfully.
Jan 30 12:50:11.872450 systemd-logind[1519]: Removed session 2.
Jan 30 12:50:11.899921 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 57454 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:50:11.901247 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:50:11.905144 systemd-logind[1519]: New session 3 of user core.
Jan 30 12:50:11.917813 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 30 12:50:11.965964 sshd[1680]: pam_unix(sshd:session): session closed for user core
Jan 30 12:50:11.973837 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:57470.service - OpenSSH per-connection server daemon (10.0.0.1:57470).
Jan 30 12:50:11.974239 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:57454.service: Deactivated successfully.
Jan 30 12:50:11.976889 systemd[1]: session-3.scope: Deactivated successfully.
Jan 30 12:50:11.977380 systemd-logind[1519]: Session 3 logged out. Waiting for processes to exit.
Jan 30 12:50:11.978432 systemd-logind[1519]: Removed session 3.
Jan 30 12:50:12.003147 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 57470 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:50:12.004366 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:50:12.008035 systemd-logind[1519]: New session 4 of user core.
Jan 30 12:50:12.017846 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 30 12:50:12.070110 sshd[1688]: pam_unix(sshd:session): session closed for user core
Jan 30 12:50:12.078848 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:57478.service - OpenSSH per-connection server daemon (10.0.0.1:57478).
Jan 30 12:50:12.079227 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:57470.service: Deactivated successfully.
Jan 30 12:50:12.081052 systemd-logind[1519]: Session 4 logged out. Waiting for processes to exit.
Jan 30 12:50:12.081605 systemd[1]: session-4.scope: Deactivated successfully.
Jan 30 12:50:12.083062 systemd-logind[1519]: Removed session 4.
Jan 30 12:50:12.111462 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 57478 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:50:12.112852 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:50:12.118026 systemd-logind[1519]: New session 5 of user core.
Jan 30 12:50:12.135926 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 30 12:50:12.199635 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 30 12:50:12.199925 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 12:50:12.217542 sudo[1703]: pam_unix(sudo:session): session closed for user root
Jan 30 12:50:12.219682 sshd[1696]: pam_unix(sshd:session): session closed for user core
Jan 30 12:50:12.227874 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:57494.service - OpenSSH per-connection server daemon (10.0.0.1:57494).
Jan 30 12:50:12.228277 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:57478.service: Deactivated successfully.
Jan 30 12:50:12.230187 systemd-logind[1519]: Session 5 logged out. Waiting for processes to exit.
Jan 30 12:50:12.230714 systemd[1]: session-5.scope: Deactivated successfully.
Jan 30 12:50:12.232291 systemd-logind[1519]: Removed session 5.
Jan 30 12:50:12.258853 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 57494 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:50:12.260260 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:50:12.264339 systemd-logind[1519]: New session 6 of user core.
Jan 30 12:50:12.284941 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 12:50:12.338917 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 30 12:50:12.339196 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 12:50:12.342433 sudo[1713]: pam_unix(sudo:session): session closed for user root
Jan 30 12:50:12.347533 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 30 12:50:12.347848 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 12:50:12.371243 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 30 12:50:12.372452 auditctl[1716]: No rules
Jan 30 12:50:12.373316 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 12:50:12.373611 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 30 12:50:12.375723 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 12:50:12.399528 augenrules[1735]: No rules
Jan 30 12:50:12.400207 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 12:50:12.401779 sudo[1712]: pam_unix(sudo:session): session closed for user root
Jan 30 12:50:12.403537 sshd[1705]: pam_unix(sshd:session): session closed for user core
Jan 30 12:50:12.420151 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:57498.service - OpenSSH per-connection server daemon (10.0.0.1:57498).
Jan 30 12:50:12.421034 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:57494.service: Deactivated successfully.
Jan 30 12:50:12.423474 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 12:50:12.424534 systemd-logind[1519]: Session 6 logged out. Waiting for processes to exit.
Jan 30 12:50:12.425554 systemd-logind[1519]: Removed session 6.
Jan 30 12:50:12.449623 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 57498 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0
Jan 30 12:50:12.450995 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:50:12.455718 systemd-logind[1519]: New session 7 of user core.
Jan 30 12:50:12.466862 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 12:50:12.518265 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 30 12:50:12.518545 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 12:50:12.831829 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 30 12:50:12.832116 (dockerd)[1766]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 30 12:50:13.139403 dockerd[1766]: time="2025-01-30T12:50:13.139264015Z" level=info msg="Starting up"
Jan 30 12:50:13.452775 dockerd[1766]: time="2025-01-30T12:50:13.452735255Z" level=info msg="Loading containers: start."
Jan 30 12:50:13.543594 kernel: Initializing XFRM netlink socket
Jan 30 12:50:13.608330 systemd-networkd[1228]: docker0: Link UP
Jan 30 12:50:13.629989 dockerd[1766]: time="2025-01-30T12:50:13.629944495Z" level=info msg="Loading containers: done."
Jan 30 12:50:13.641420 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1989248372-merged.mount: Deactivated successfully.
Jan 30 12:50:13.644226 dockerd[1766]: time="2025-01-30T12:50:13.644171935Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 30 12:50:13.644336 dockerd[1766]: time="2025-01-30T12:50:13.644283015Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 30 12:50:13.644421 dockerd[1766]: time="2025-01-30T12:50:13.644394655Z" level=info msg="Daemon has completed initialization"
Jan 30 12:50:13.677692 dockerd[1766]: time="2025-01-30T12:50:13.677559135Z" level=info msg="API listen on /run/docker.sock"
Jan 30 12:50:13.677938 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 30 12:50:14.534745 containerd[1542]: time="2025-01-30T12:50:14.534699655Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 30 12:50:15.241353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2273780974.mount: Deactivated successfully.
Jan 30 12:50:16.063755 containerd[1542]: time="2025-01-30T12:50:16.063543815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:16.064674 containerd[1542]: time="2025-01-30T12:50:16.064334655Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864937"
Jan 30 12:50:16.065541 containerd[1542]: time="2025-01-30T12:50:16.065499135Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:16.068641 containerd[1542]: time="2025-01-30T12:50:16.068602055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:16.069897 containerd[1542]: time="2025-01-30T12:50:16.069847455Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 1.5351034s"
Jan 30 12:50:16.069947 containerd[1542]: time="2025-01-30T12:50:16.069899295Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\""
Jan 30 12:50:16.090051 containerd[1542]: time="2025-01-30T12:50:16.090006615Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 30 12:50:17.277314 containerd[1542]: time="2025-01-30T12:50:17.277106655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:17.278335 containerd[1542]: time="2025-01-30T12:50:17.278059415Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901563"
Jan 30 12:50:17.279172 containerd[1542]: time="2025-01-30T12:50:17.279112935Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:17.282096 containerd[1542]: time="2025-01-30T12:50:17.282040895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:17.283412 containerd[1542]: time="2025-01-30T12:50:17.283377655Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.19332404s"
Jan 30 12:50:17.283493 containerd[1542]: time="2025-01-30T12:50:17.283417135Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\""
Jan 30 12:50:17.303717 containerd[1542]: time="2025-01-30T12:50:17.303677655Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 30 12:50:17.649750 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 30 12:50:17.659762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 12:50:17.750839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 12:50:17.755322 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 12:50:17.799118 kubelet[2004]: E0130 12:50:17.799018 2004 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 12:50:17.802104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 12:50:17.802300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 12:50:18.587998 containerd[1542]: time="2025-01-30T12:50:18.587542655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:18.589767 containerd[1542]: time="2025-01-30T12:50:18.589733655Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164340"
Jan 30 12:50:18.599308 containerd[1542]: time="2025-01-30T12:50:18.599275975Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:18.603621 containerd[1542]: time="2025-01-30T12:50:18.603583295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:18.604745 containerd[1542]: time="2025-01-30T12:50:18.604660295Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.300938s"
Jan 30 12:50:18.604745 containerd[1542]: time="2025-01-30T12:50:18.604700055Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\""
Jan 30 12:50:18.628514 containerd[1542]: time="2025-01-30T12:50:18.628478135Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 30 12:50:20.657018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2914854947.mount: Deactivated successfully.
Jan 30 12:50:21.093091 containerd[1542]: time="2025-01-30T12:50:21.092937415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:21.093513 containerd[1542]: time="2025-01-30T12:50:21.093379895Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714"
Jan 30 12:50:21.094159 containerd[1542]: time="2025-01-30T12:50:21.094114935Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:21.096181 containerd[1542]: time="2025-01-30T12:50:21.096134255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:21.097273 containerd[1542]: time="2025-01-30T12:50:21.097219615Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 2.46870172s"
Jan 30 12:50:21.097327 containerd[1542]: time="2025-01-30T12:50:21.097271535Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\""
Jan 30 12:50:21.118706 containerd[1542]: time="2025-01-30T12:50:21.118663375Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 30 12:50:21.746557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3501860751.mount: Deactivated successfully.
Jan 30 12:50:22.705242 containerd[1542]: time="2025-01-30T12:50:22.705179335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:22.705749 containerd[1542]: time="2025-01-30T12:50:22.705714615Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Jan 30 12:50:22.706649 containerd[1542]: time="2025-01-30T12:50:22.706606295Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:22.709950 containerd[1542]: time="2025-01-30T12:50:22.709910135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:22.711330 containerd[1542]: time="2025-01-30T12:50:22.711186975Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.59247952s"
Jan 30 12:50:22.711330 containerd[1542]: time="2025-01-30T12:50:22.711225335Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 30 12:50:22.730873 containerd[1542]: time="2025-01-30T12:50:22.730826935Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 30 12:50:23.214651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3755781658.mount: Deactivated successfully.
Jan 30 12:50:23.224182 containerd[1542]: time="2025-01-30T12:50:23.224123055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:23.227051 containerd[1542]: time="2025-01-30T12:50:23.227007415Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Jan 30 12:50:23.228509 containerd[1542]: time="2025-01-30T12:50:23.228466815Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:23.232899 containerd[1542]: time="2025-01-30T12:50:23.232837535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:23.233710 containerd[1542]: time="2025-01-30T12:50:23.233545255Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 502.67592ms"
Jan 30 12:50:23.233710 containerd[1542]: time="2025-01-30T12:50:23.233602775Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jan 30 12:50:23.255662 containerd[1542]: time="2025-01-30T12:50:23.255606255Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 30 12:50:23.819802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985345665.mount: Deactivated successfully.
Jan 30 12:50:25.207986 containerd[1542]: time="2025-01-30T12:50:25.207497575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:25.208453 containerd[1542]: time="2025-01-30T12:50:25.208416895Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Jan 30 12:50:25.208985 containerd[1542]: time="2025-01-30T12:50:25.208957975Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:25.212480 containerd[1542]: time="2025-01-30T12:50:25.212436535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:50:25.213868 containerd[1542]: time="2025-01-30T12:50:25.213810375Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 1.95816016s"
Jan 30 12:50:25.213868 containerd[1542]: time="2025-01-30T12:50:25.213850575Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Jan 30 12:50:27.899787 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 30 12:50:27.908791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 12:50:28.085263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 12:50:28.089492 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 12:50:28.128370 kubelet[2229]: E0130 12:50:28.128261 2229 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 12:50:28.130904 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 12:50:28.131082 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 12:50:31.605452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 12:50:31.621932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 12:50:31.644853 systemd[1]: Reloading requested from client PID 2247 ('systemctl') (unit session-7.scope)...
Jan 30 12:50:31.644875 systemd[1]: Reloading...
Jan 30 12:50:31.714621 zram_generator::config[2287]: No configuration found.
Jan 30 12:50:31.834937 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 12:50:31.887586 systemd[1]: Reloading finished in 242 ms.
Jan 30 12:50:31.935467 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 12:50:31.938483 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 12:50:31.938761 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 12:50:31.959880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 12:50:32.056326 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 12:50:32.061970 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 12:50:32.113065 kubelet[2346]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 12:50:32.113065 kubelet[2346]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 12:50:32.113065 kubelet[2346]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 12:50:32.114157 kubelet[2346]: I0130 12:50:32.113847 2346 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 12:50:32.708858 kubelet[2346]: I0130 12:50:32.708809 2346 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 12:50:32.708858 kubelet[2346]: I0130 12:50:32.708841 2346 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 12:50:32.709056 kubelet[2346]: I0130 12:50:32.709042 2346 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 12:50:32.747661 kubelet[2346]: I0130 12:50:32.747614 2346 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 12:50:32.749333 kubelet[2346]: E0130 12:50:32.749121 2346 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.34:6443: connect: connection refused
Jan 30 12:50:32.757092 kubelet[2346]: I0130 12:50:32.757057 2346 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 12:50:32.758399 kubelet[2346]: I0130 12:50:32.758344 2346 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 12:50:32.758587 kubelet[2346]: I0130 12:50:32.758401 2346 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 12:50:32.758721 kubelet[2346]: I0130 12:50:32.758694 2346 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 12:50:32.758721 kubelet[2346]: I0130 12:50:32.758707 2346 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 12:50:32.759058 kubelet[2346]: I0130 12:50:32.759038 2346 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 12:50:32.762607 kubelet[2346]: I0130 12:50:32.762581 2346 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 12:50:32.762651 kubelet[2346]: I0130 12:50:32.762621 2346 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 12:50:32.763313 kubelet[2346]: I0130 12:50:32.763059 2346 kubelet.go:312] "Adding apiserver pod source"
Jan 30 12:50:32.763411 kubelet[2346]: I0130 12:50:32.763392 2346 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 12:50:32.763950 kubelet[2346]: W0130 12:50:32.763868 2346 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
Jan 30 12:50:32.763950 kubelet[2346]: E0130 12:50:32.763927 2346 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
Jan 30 12:50:32.764032 kubelet[2346]: W0130 12:50:32.763932 2346 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
Jan 30 12:50:32.764032 kubelet[2346]: E0130 12:50:32.763980 2346 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
Jan 30 12:50:32.764811 kubelet[2346]: I0130 12:50:32.764771 2346 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 12:50:32.765155 kubelet[2346]: I0130 12:50:32.765144 2346 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 12:50:32.765260 kubelet[2346]: W0130 12:50:32.765249 2346 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 12:50:32.766681 kubelet[2346]: I0130 12:50:32.766204 2346 server.go:1264] "Started kubelet"
Jan 30 12:50:32.767979 kubelet[2346]: I0130 12:50:32.767731 2346 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 12:50:32.769612 kubelet[2346]: I0130 12:50:32.768323 2346 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 12:50:32.769612 kubelet[2346]: I0130 12:50:32.768681 2346 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 12:50:32.769612 kubelet[2346]: I0130 12:50:32.768905 2346 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 12:50:32.769962 kubelet[2346]: I0130 12:50:32.769934 2346 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 12:50:32.771672 kubelet[2346]: E0130 12:50:32.771405 2346 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f795f9de4e46f default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 12:50:32.766178415 +0000 UTC m=+0.700296921,LastTimestamp:2025-01-30 12:50:32.766178415 +0000 UTC m=+0.700296921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 30 12:50:32.771672 kubelet[2346]: E0130 12:50:32.771656 2346 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 12:50:32.771831 kubelet[2346]: I0130 12:50:32.771754 2346 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 12:50:32.771856 kubelet[2346]: I0130 12:50:32.771835 2346 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 12:50:32.772918 kubelet[2346]: E0130 12:50:32.772882 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms"
Jan 30 12:50:32.773267 kubelet[2346]: I0130 12:50:32.773227 2346 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 12:50:32.773453 kubelet[2346]: W0130 12:50:32.773411 2346 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
Jan 30 12:50:32.773526 kubelet[2346]: E0130 12:50:32.773515 2346 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
Jan 30 12:50:32.774879 kubelet[2346]: I0130 12:50:32.774855 2346 factory.go:221] Registration of the systemd container factory successfully
Jan 30 12:50:32.775231 kubelet[2346]: I0130 12:50:32.775208 2346 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 12:50:32.775411 kubelet[2346]: E0130 12:50:32.775220 2346 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 12:50:32.777248 kubelet[2346]: I0130 12:50:32.777215 2346 factory.go:221] Registration of the containerd container factory successfully
Jan 30 12:50:32.793183 kubelet[2346]: I0130 12:50:32.793121 2346 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 12:50:32.794293 kubelet[2346]: I0130 12:50:32.794267 2346 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 12:50:32.795478 kubelet[2346]: I0130 12:50:32.794563 2346 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 12:50:32.795478 kubelet[2346]: I0130 12:50:32.794642 2346 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 12:50:32.795478 kubelet[2346]: E0130 12:50:32.794688 2346 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 12:50:32.795478 kubelet[2346]: W0130 12:50:32.795131 2346 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
Jan 30 12:50:32.795478 kubelet[2346]: E0130 12:50:32.795164 2346 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
Jan 30 12:50:32.798356 kubelet[2346]: I0130 12:50:32.798336 2346 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 12:50:32.798356 kubelet[2346]: I0130 12:50:32.798352 2346 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 12:50:32.798468 kubelet[2346]: I0130 12:50:32.798370 2346 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 12:50:32.864873 kubelet[2346]: I0130 12:50:32.864833 2346 policy_none.go:49] "None policy: Start"
Jan 30 12:50:32.865682 kubelet[2346]: I0130 12:50:32.865647 2346 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 12:50:32.865682 kubelet[2346]: I0130 12:50:32.865681 2346 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 12:50:32.871418 kubelet[2346]: I0130 12:50:32.870951 2346 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 12:50:32.871418 kubelet[2346]: I0130 12:50:32.871174 2346 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 12:50:32.871418 kubelet[2346]: I0130 12:50:32.871294 2346 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 12:50:32.872652 kubelet[2346]: E0130 12:50:32.872604 2346 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 30 12:50:32.873497 kubelet[2346]: I0130 12:50:32.873451 2346 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 30 12:50:32.873936 kubelet[2346]: E0130 12:50:32.873894 2346 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost"
Jan 30 12:50:32.895179 kubelet[2346]: I0130 12:50:32.895127 2346 topology_manager.go:215] "Topology Admit Handler" podUID="26846b01f8abfc8bf823418b05436632" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 30 12:50:32.896415 kubelet[2346]: I0130 12:50:32.896381 2346 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 30 12:50:32.897197 kubelet[2346]: I0130 12:50:32.897170 2346 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 30 12:50:32.973738 kubelet[2346]: E0130 12:50:32.973508 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms"
Jan 30 12:50:32.973738 kubelet[2346]: I0130 12:50:32.973552 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26846b01f8abfc8bf823418b05436632-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"26846b01f8abfc8bf823418b05436632\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 12:50:32.973738 kubelet[2346]: I0130 12:50:32.973614 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 12:50:32.973738 kubelet[2346]: I0130 12:50:32.973635 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 12:50:32.973738 kubelet[2346]: I0130 12:50:32.973651 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26846b01f8abfc8bf823418b05436632-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"26846b01f8abfc8bf823418b05436632\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 12:50:32.973935 kubelet[2346]: I0130 12:50:32.973666 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26846b01f8abfc8bf823418b05436632-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"26846b01f8abfc8bf823418b05436632\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 12:50:32.973935 kubelet[2346]: I0130 12:50:32.973681 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 12:50:32.973935 kubelet[2346]: I0130 12:50:32.973740 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 12:50:32.973935 kubelet[2346]: I0130 12:50:32.973767 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 12:50:32.973935 kubelet[2346]: I0130 12:50:32.973798 2346 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost"
Jan 30 12:50:33.075233 kubelet[2346]: I0130 12:50:33.075205 2346 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 30 12:50:33.075573 kubelet[2346]: E0130 12:50:33.075546 2346 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost"
Jan 30 12:50:33.200408 kubelet[2346]: E0130 12:50:33.200360 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:50:33.201089 containerd[1542]: time="2025-01-30T12:50:33.201048295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:26846b01f8abfc8bf823418b05436632,Namespace:kube-system,Attempt:0,}"
Jan 30 12:50:33.202230 kubelet[2346]: E0130 12:50:33.202205 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:50:33.202699 containerd[1542]: time="2025-01-30T12:50:33.202656295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}"
Jan 30 12:50:33.203140 kubelet[2346]: E0130 12:50:33.203122 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:50:33.203653 containerd[1542]: time="2025-01-30T12:50:33.203530295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}"
Jan 30 12:50:33.375358 kubelet[2346]: E0130 12:50:33.374930 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms"
Jan 30 12:50:33.477662 kubelet[2346]: I0130 12:50:33.477628 2346 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 30 12:50:33.478009 kubelet[2346]: E0130 12:50:33.477969 2346 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost"
Jan 30 12:50:33.807213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3302637076.mount: Deactivated successfully.
Jan 30 12:50:33.815539 containerd[1542]: time="2025-01-30T12:50:33.815474895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 12:50:33.816758 containerd[1542]: time="2025-01-30T12:50:33.816705455Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 12:50:33.817768 containerd[1542]: time="2025-01-30T12:50:33.817728095Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 12:50:33.818748 containerd[1542]: time="2025-01-30T12:50:33.818691215Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 12:50:33.819661 containerd[1542]: time="2025-01-30T12:50:33.819624055Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 12:50:33.821785 containerd[1542]: time="2025-01-30T12:50:33.821732215Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 12:50:33.822458 containerd[1542]: time="2025-01-30T12:50:33.822302175Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jan 30 12:50:33.825690 containerd[1542]: time="2025-01-30T12:50:33.825648815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 12:50:33.827616 containerd[1542]: time="2025-01-30T12:50:33.827373175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 624.6304ms"
Jan 30 12:50:33.828117 containerd[1542]: time="2025-01-30T12:50:33.828086335Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 626.94672ms"
Jan 30 12:50:33.830306 containerd[1542]: time="2025-01-30T12:50:33.830236215Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 626.63624ms"
Jan 30 12:50:33.978097 containerd[1542]: time="2025-01-30T12:50:33.977837175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 12:50:33.978097 containerd[1542]: time="2025-01-30T12:50:33.977916615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 12:50:33.978097 containerd[1542]: time="2025-01-30T12:50:33.977928815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:50:33.978097 containerd[1542]: time="2025-01-30T12:50:33.978025055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:50:33.980249 containerd[1542]: time="2025-01-30T12:50:33.980116735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 12:50:33.980249 containerd[1542]: time="2025-01-30T12:50:33.980198695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 12:50:33.980249 containerd[1542]: time="2025-01-30T12:50:33.980220775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:50:33.980515 containerd[1542]: time="2025-01-30T12:50:33.980362495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:50:33.982304 containerd[1542]: time="2025-01-30T12:50:33.982055695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 12:50:33.982304 containerd[1542]: time="2025-01-30T12:50:33.982117975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 12:50:33.982304 containerd[1542]: time="2025-01-30T12:50:33.982131535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:50:33.982304 containerd[1542]: time="2025-01-30T12:50:33.982216495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:50:33.997977 kubelet[2346]: W0130 12:50:33.997889 2346 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Jan 30 12:50:33.997977 kubelet[2346]: E0130 12:50:33.997955 2346 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Jan 30 12:50:34.023228 kubelet[2346]: W0130 12:50:34.022951 2346 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Jan 30 12:50:34.023228 kubelet[2346]: E0130 12:50:34.023107 2346 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Jan 30 12:50:34.036582 containerd[1542]: time="2025-01-30T12:50:34.036531935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:26846b01f8abfc8bf823418b05436632,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca8cdd82f558c51ead27a68938a3764520fdff4fbd1e9f9f41ab04389516d806\"" Jan 30 12:50:34.037347 containerd[1542]: time="2025-01-30T12:50:34.037272215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"084de78c2e70c5b1671317c6e39e598839a7afeb6f58780ed5fd6c1cb56e4940\"" Jan 30 12:50:34.038524 kubelet[2346]: E0130 12:50:34.038448 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:34.038988 kubelet[2346]: E0130 12:50:34.038815 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:34.039935 containerd[1542]: time="2025-01-30T12:50:34.039883135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7f20d3374647430651c525721f616c95bb65679b8c2782f010ac2b61d28ece1\"" Jan 30 12:50:34.041108 kubelet[2346]: E0130 12:50:34.040849 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:34.043534 containerd[1542]: time="2025-01-30T12:50:34.043487895Z" level=info msg="CreateContainer within sandbox \"f7f20d3374647430651c525721f616c95bb65679b8c2782f010ac2b61d28ece1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 12:50:34.043726 containerd[1542]: time="2025-01-30T12:50:34.043514775Z" level=info msg="CreateContainer within sandbox \"084de78c2e70c5b1671317c6e39e598839a7afeb6f58780ed5fd6c1cb56e4940\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" 
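
The entries above trace the standard CRI call chain the kubelet drives: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer launches the result. A minimal Go sketch of the same chain against containerd's CRI endpoint follows; the socket path and the image name are illustrative assumptions, not values taken from this host.

```go
// Sketch: RunPodSandbox -> CreateContainer -> StartContainer, the chain
// visible in the log above, issued directly via the CRI v1 gRPC client.
// Socket path and image name are assumptions for illustration only.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox metadata mirrors the kube-apiserver-localhost entry above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-apiserver-localhost",
			Namespace: "kube-system",
			Uid:       "26846b01f8abfc8bf823418b05436632",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver", Attempt: 0},
			// Image name is an assumption; the log does not record it here.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-apiserver:v1.30.1"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started %s in sandbox %s", ctr.ContainerId, sb.PodSandboxId)
}
```
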
Jan 30 12:50:34.043839 containerd[1542]: time="2025-01-30T12:50:34.043553695Z" level=info msg="CreateContainer within sandbox \"ca8cdd82f558c51ead27a68938a3764520fdff4fbd1e9f9f41ab04389516d806\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 12:50:34.064280 containerd[1542]: time="2025-01-30T12:50:34.063530175Z" level=info msg="CreateContainer within sandbox \"f7f20d3374647430651c525721f616c95bb65679b8c2782f010ac2b61d28ece1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"746678009995c3ebf5344105219f4b9c139ba2e58e6a741cbc8ded8bfdcb0a2a\"" Jan 30 12:50:34.064400 containerd[1542]: time="2025-01-30T12:50:34.064321255Z" level=info msg="StartContainer for \"746678009995c3ebf5344105219f4b9c139ba2e58e6a741cbc8ded8bfdcb0a2a\"" Jan 30 12:50:34.068837 containerd[1542]: time="2025-01-30T12:50:34.068778735Z" level=info msg="CreateContainer within sandbox \"ca8cdd82f558c51ead27a68938a3764520fdff4fbd1e9f9f41ab04389516d806\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dd7259e7b5c1b0a08fdd1a3562652ed6dcc98b2b4f8e4020f92c163a2e2cdb94\"" Jan 30 12:50:34.069245 containerd[1542]: time="2025-01-30T12:50:34.069176815Z" level=info msg="CreateContainer within sandbox \"084de78c2e70c5b1671317c6e39e598839a7afeb6f58780ed5fd6c1cb56e4940\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ec60762f90cdb5dba006959240562198f7f3510c6135b218c3fd5fc3cf431bcc\"" Jan 30 12:50:34.069532 containerd[1542]: time="2025-01-30T12:50:34.069503855Z" level=info msg="StartContainer for \"ec60762f90cdb5dba006959240562198f7f3510c6135b218c3fd5fc3cf431bcc\"" Jan 30 12:50:34.069756 containerd[1542]: time="2025-01-30T12:50:34.069632215Z" level=info msg="StartContainer for \"dd7259e7b5c1b0a08fdd1a3562652ed6dcc98b2b4f8e4020f92c163a2e2cdb94\"" Jan 30 12:50:34.081891 kubelet[2346]: W0130 12:50:34.081777 2346 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Jan 30 12:50:34.081891 kubelet[2346]: E0130 12:50:34.081864 2346 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Jan 30 12:50:34.126881 kubelet[2346]: W0130 12:50:34.126811 2346 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Jan 30 12:50:34.127702 kubelet[2346]: E0130 12:50:34.127670 2346 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Jan 30 12:50:34.161582 containerd[1542]: time="2025-01-30T12:50:34.161275215Z" level=info msg="StartContainer for \"746678009995c3ebf5344105219f4b9c139ba2e58e6a741cbc8ded8bfdcb0a2a\" returns successfully" Jan 30 12:50:34.167459 containerd[1542]: time="2025-01-30T12:50:34.167345175Z" level=info msg="StartContainer for \"dd7259e7b5c1b0a08fdd1a3562652ed6dcc98b2b4f8e4020f92c163a2e2cdb94\" returns successfully" Jan 30 12:50:34.177239 containerd[1542]: 
time="2025-01-30T12:50:34.172867215Z" level=info msg="StartContainer for \"ec60762f90cdb5dba006959240562198f7f3510c6135b218c3fd5fc3cf431bcc\" returns successfully" Jan 30 12:50:34.182725 kubelet[2346]: E0130 12:50:34.178119 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="1.6s" Jan 30 12:50:34.280404 kubelet[2346]: I0130 12:50:34.280301 2346 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:50:34.280804 kubelet[2346]: E0130 12:50:34.280754 2346 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Jan 30 12:50:34.809356 kubelet[2346]: E0130 12:50:34.809317 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:34.818446 kubelet[2346]: E0130 12:50:34.815063 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:34.818446 kubelet[2346]: E0130 12:50:34.818346 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:35.822352 kubelet[2346]: E0130 12:50:35.822316 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:35.882200 kubelet[2346]: I0130 12:50:35.882139 2346 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:50:36.031964 kubelet[2346]: E0130 12:50:36.031918 2346 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 12:50:36.061385 kubelet[2346]: E0130 12:50:36.061248 2346 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f795f9de4e46f default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 12:50:32.766178415 +0000 UTC m=+0.700296921,LastTimestamp:2025-01-30 12:50:32.766178415 +0000 UTC m=+0.700296921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 12:50:36.101689 kubelet[2346]: I0130 12:50:36.101372 2346 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 12:50:36.766156 kubelet[2346]: I0130 12:50:36.765883 2346 apiserver.go:52] "Watching apiserver" Jan 30 12:50:36.772674 kubelet[2346]: I0130 12:50:36.772638 2346 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 12:50:36.827942 kubelet[2346]: E0130 12:50:36.827897 2346 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass 
with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 30 12:50:36.828392 kubelet[2346]: E0130 12:50:36.828374 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:38.561218 systemd[1]: Reloading requested from client PID 2622 ('systemctl') (unit session-7.scope)... Jan 30 12:50:38.561236 systemd[1]: Reloading... Jan 30 12:50:38.627649 zram_generator::config[2664]: No configuration found. Jan 30 12:50:38.736791 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:50:38.804441 systemd[1]: Reloading finished in 242 ms. Jan 30 12:50:38.841070 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:50:38.854487 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 12:50:38.855265 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:50:38.875046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:50:39.005532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:50:39.011602 (kubelet)[2713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 12:50:39.100133 kubelet[2713]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:50:39.100133 kubelet[2713]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 12:50:39.100133 kubelet[2713]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:50:39.100133 kubelet[2713]: I0130 12:50:39.100101 2713 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 12:50:39.105602 kubelet[2713]: I0130 12:50:39.105238 2713 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 12:50:39.105602 kubelet[2713]: I0130 12:50:39.105292 2713 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 12:50:39.105602 kubelet[2713]: I0130 12:50:39.105613 2713 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 12:50:39.107455 kubelet[2713]: I0130 12:50:39.107422 2713 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 12:50:39.109995 kubelet[2713]: I0130 12:50:39.109930 2713 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 12:50:39.120558 kubelet[2713]: I0130 12:50:39.120523 2713 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 12:50:39.121143 kubelet[2713]: I0130 12:50:39.121056 2713 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 12:50:39.121306 kubelet[2713]: I0130 12:50:39.121101 2713 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 12:50:39.121402 kubelet[2713]: I0130 12:50:39.121312 2713 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 12:50:39.121402 kubelet[2713]: I0130 12:50:39.121322 2713 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 12:50:39.121402 kubelet[2713]: I0130 12:50:39.121365 2713 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:50:39.121500 kubelet[2713]: I0130 12:50:39.121472 2713 kubelet.go:400] "Attempting to sync node with API server" Jan 30 12:50:39.121500 kubelet[2713]: I0130 12:50:39.121484 2713 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 12:50:39.121538 kubelet[2713]: I0130 12:50:39.121514 2713 kubelet.go:312] "Adding apiserver pod source" Jan 30 12:50:39.121538 kubelet[2713]: I0130 12:50:39.121527 2713 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 12:50:39.125857 kubelet[2713]: I0130 12:50:39.125714 2713 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 12:50:39.125999 kubelet[2713]: I0130 12:50:39.125956 2713 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 12:50:39.127496 kubelet[2713]: I0130 12:50:39.126499 2713 server.go:1264] "Started kubelet" Jan 30 12:50:39.127496 kubelet[2713]: I0130 12:50:39.126800 2713 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 12:50:39.127496 kubelet[2713]: I0130 12:50:39.127001 2713 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 12:50:39.127496 kubelet[2713]: I0130 12:50:39.127209 2713 
server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 12:50:39.133596 kubelet[2713]: I0130 12:50:39.133529 2713 server.go:455] "Adding debug handlers to kubelet server" Jan 30 12:50:39.133934 kubelet[2713]: I0130 12:50:39.133914 2713 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 12:50:39.139853 kubelet[2713]: I0130 12:50:39.139821 2713 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 12:50:39.139972 kubelet[2713]: I0130 12:50:39.139922 2713 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 12:50:39.140633 kubelet[2713]: I0130 12:50:39.140077 2713 reconciler.go:26] "Reconciler: start to sync state" Jan 30 12:50:39.142746 kubelet[2713]: I0130 12:50:39.142714 2713 factory.go:221] Registration of the systemd container factory successfully Jan 30 12:50:39.142863 kubelet[2713]: I0130 12:50:39.142836 2713 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 12:50:39.143408 kubelet[2713]: E0130 12:50:39.143383 2713 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 12:50:39.145428 kubelet[2713]: I0130 12:50:39.144221 2713 factory.go:221] Registration of the containerd container factory successfully Jan 30 12:50:39.159101 kubelet[2713]: I0130 12:50:39.159038 2713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 12:50:39.165755 kubelet[2713]: I0130 12:50:39.165706 2713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 12:50:39.165755 kubelet[2713]: I0130 12:50:39.165757 2713 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 12:50:39.165957 kubelet[2713]: I0130 12:50:39.165786 2713 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 12:50:39.165957 kubelet[2713]: E0130 12:50:39.165840 2713 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 12:50:39.197380 kubelet[2713]: I0130 12:50:39.197342 2713 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 12:50:39.197877 kubelet[2713]: I0130 12:50:39.197536 2713 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 12:50:39.197877 kubelet[2713]: I0130 12:50:39.197561 2713 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:50:39.197877 kubelet[2713]: I0130 12:50:39.197744 2713 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 12:50:39.197877 kubelet[2713]: I0130 12:50:39.197757 2713 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 12:50:39.197877 kubelet[2713]: I0130 12:50:39.197788 2713 policy_none.go:49] "None policy: Start" Jan 30 12:50:39.198506 kubelet[2713]: I0130 12:50:39.198490 2713 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 12:50:39.198637 kubelet[2713]: I0130 12:50:39.198627 2713 state_mem.go:35] "Initializing new in-memory state store" Jan 30 12:50:39.198885 kubelet[2713]: I0130 12:50:39.198870 2713 state_mem.go:75] "Updated machine memory state" Jan 30 12:50:39.201080 kubelet[2713]: I0130 12:50:39.201055 2713 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 12:50:39.201437 
kubelet[2713]: I0130 12:50:39.201395 2713 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 12:50:39.201732 kubelet[2713]: I0130 12:50:39.201706 2713 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 12:50:39.266682 kubelet[2713]: I0130 12:50:39.266614 2713 topology_manager.go:215] "Topology Admit Handler" podUID="26846b01f8abfc8bf823418b05436632" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 12:50:39.266803 kubelet[2713]: I0130 12:50:39.266779 2713 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 12:50:39.266848 kubelet[2713]: I0130 12:50:39.266819 2713 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 12:50:39.307209 kubelet[2713]: I0130 12:50:39.307173 2713 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:50:39.316673 kubelet[2713]: I0130 12:50:39.316381 2713 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 30 12:50:39.316673 kubelet[2713]: I0130 12:50:39.316484 2713 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 12:50:39.341940 kubelet[2713]: I0130 12:50:39.341898 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26846b01f8abfc8bf823418b05436632-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"26846b01f8abfc8bf823418b05436632\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:50:39.341940 kubelet[2713]: I0130 12:50:39.341942 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26846b01f8abfc8bf823418b05436632-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"26846b01f8abfc8bf823418b05436632\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:50:39.342199 kubelet[2713]: I0130 12:50:39.341968 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26846b01f8abfc8bf823418b05436632-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"26846b01f8abfc8bf823418b05436632\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:50:39.342199 kubelet[2713]: I0130 12:50:39.341994 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:50:39.342199 kubelet[2713]: I0130 12:50:39.342010 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:50:39.342199 kubelet[2713]: I0130 12:50:39.342027 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:50:39.342199 kubelet[2713]: I0130 12:50:39.342042 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:50:39.342336 kubelet[2713]: I0130 12:50:39.342059 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:50:39.342336 kubelet[2713]: I0130 12:50:39.342073 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 12:50:39.565375 sudo[2748]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 12:50:39.565729 sudo[2748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 12:50:39.585035 kubelet[2713]: E0130 12:50:39.584668 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:39.589647 kubelet[2713]: E0130 12:50:39.589613 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:39.589839 kubelet[2713]: E0130 12:50:39.589693 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:40.039421 sudo[2748]: pam_unix(sudo:session): session closed for user root Jan 30 12:50:40.127929 kubelet[2713]: I0130 12:50:40.127874 2713 apiserver.go:52] "Watching apiserver" Jan 30 12:50:40.140527 kubelet[2713]: I0130 12:50:40.140480 2713 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 12:50:40.179723 kubelet[2713]: E0130 12:50:40.179676 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:40.182393 kubelet[2713]: E0130 12:50:40.182064 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:40.191737 kubelet[2713]: I0130 12:50:40.191255 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.188014038 podStartE2EDuration="1.188014038s" podCreationTimestamp="2025-01-30 12:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:50:40.187896356 +0000 UTC m=+1.172051137" watchObservedRunningTime="2025-01-30 12:50:40.188014038 +0000 UTC m=+1.172168819" Jan 30 12:50:40.192773 kubelet[2713]: E0130 12:50:40.192710 2713 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 12:50:40.194603 kubelet[2713]: E0130 12:50:40.193220 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:40.216379 kubelet[2713]: I0130 12:50:40.216310 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.216290122 podStartE2EDuration="1.216290122s" podCreationTimestamp="2025-01-30 12:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:50:40.21605004 +0000 UTC m=+1.200204821" watchObservedRunningTime="2025-01-30 12:50:40.216290122 +0000 UTC m=+1.200444903" Jan 30 12:50:40.217640 kubelet[2713]: I0130 12:50:40.216511 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.216505045 podStartE2EDuration="1.216505045s" podCreationTimestamp="2025-01-30 12:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:50:40.201410152 +0000 UTC m=+1.185564933" watchObservedRunningTime="2025-01-30 12:50:40.216505045 +0000 UTC m=+1.200659826" Jan 30 12:50:41.181488 kubelet[2713]: E0130 12:50:41.181431 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:41.853711 sudo[1748]: pam_unix(sudo:session): session closed for user root Jan 30 12:50:41.858180 sshd[1741]: pam_unix(sshd:session): session closed for user core Jan 30 12:50:41.861540 systemd-logind[1519]: Session 7 logged out. Waiting for processes to exit. Jan 30 12:50:41.861805 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:57498.service: Deactivated successfully. Jan 30 12:50:41.864335 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 12:50:41.865377 systemd-logind[1519]: Removed session 7. 
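
After the restart triggered by the systemd reload, the kubelet finds the node object already present ("Node was previously registered") instead of POSTing a new one. A hedged client-go sketch for inspecting that registration from outside the kubelet; the kubeconfig path is an assumption.

```go
// Sketch: verify the node registration seen above from the client side.
// The kubeconfig path is an assumption; any admin kubeconfig works.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// While the apiserver is still coming up this fails the same way the
	// kubelet did above: "connect: connection refused".
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s\n", c.Type, c.Status)
	}
}
```
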
Jan 30 12:50:42.552973 kubelet[2713]: E0130 12:50:42.552858 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:43.292223 kubelet[2713]: E0130 12:50:43.292164 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:49.217327 kubelet[2713]: E0130 12:50:49.214277 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:50.197413 kubelet[2713]: E0130 12:50:50.196717 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:50.581139 update_engine[1526]: I20250130 12:50:50.580977 1526 update_attempter.cc:509] Updating boot flags... Jan 30 12:50:50.612678 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2800) Jan 30 12:50:50.644596 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2799) Jan 30 12:50:50.675882 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2799) Jan 30 12:50:52.574954 kubelet[2713]: E0130 12:50:52.574920 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:52.620178 kubelet[2713]: I0130 12:50:52.620148 2713 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 12:50:52.620760 containerd[1542]: time="2025-01-30T12:50:52.620720911Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
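
The recurring "Nameserver limits exceeded" warning comes from the kubelet capping a pod's resolv.conf at three nameservers; with 1.1.1.1, 1.0.0.1, and 8.8.8.8 already applied, any further entries are omitted. A simplified stand-alone sketch of that check (not the kubelet's actual code):

```go
// Sketch of the check behind "Nameserver limits exceeded": keep at most
// three nameserver lines from resolv.conf and report what was dropped.
// Simplified re-implementation for illustration, not kubelet source.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // the classic glibc/kubelet limit

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded: applying %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
		return
	}
	fmt.Printf("nameservers: %v\n", servers)
}
```
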
Jan 30 12:50:52.621078 kubelet[2713]: I0130 12:50:52.621000 2713 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 12:50:53.006197 kubelet[2713]: I0130 12:50:53.006147 2713 topology_manager.go:215] "Topology Admit Handler" podUID="c7577378-ca80-4674-bfec-17affbf03915" podNamespace="kube-system" podName="kube-proxy-2f4xp" Jan 30 12:50:53.034953 kubelet[2713]: I0130 12:50:53.034909 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwgfb\" (UniqueName: \"kubernetes.io/projected/c7577378-ca80-4674-bfec-17affbf03915-kube-api-access-dwgfb\") pod \"kube-proxy-2f4xp\" (UID: \"c7577378-ca80-4674-bfec-17affbf03915\") " pod="kube-system/kube-proxy-2f4xp" Jan 30 12:50:53.035095 kubelet[2713]: I0130 12:50:53.034962 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c7577378-ca80-4674-bfec-17affbf03915-kube-proxy\") pod \"kube-proxy-2f4xp\" (UID: \"c7577378-ca80-4674-bfec-17affbf03915\") " pod="kube-system/kube-proxy-2f4xp" Jan 30 12:50:53.035095 kubelet[2713]: I0130 12:50:53.034997 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7577378-ca80-4674-bfec-17affbf03915-xtables-lock\") pod \"kube-proxy-2f4xp\" (UID: \"c7577378-ca80-4674-bfec-17affbf03915\") " pod="kube-system/kube-proxy-2f4xp" Jan 30 12:50:53.035095 kubelet[2713]: I0130 12:50:53.035016 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7577378-ca80-4674-bfec-17affbf03915-lib-modules\") pod \"kube-proxy-2f4xp\" (UID: \"c7577378-ca80-4674-bfec-17affbf03915\") " pod="kube-system/kube-proxy-2f4xp" Jan 30 12:50:53.048143 kubelet[2713]: I0130 12:50:53.048089 2713 topology_manager.go:215] "Topology Admit Handler" podUID="0891400a-e1eb-48b8-b3ae-114768d1daf2" podNamespace="kube-system" podName="cilium-6hmdv" Jan 30 12:50:53.136112 kubelet[2713]: I0130 12:50:53.135986 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-bpf-maps\") pod \"cilium-6hmdv\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " pod="kube-system/cilium-6hmdv" Jan 30 12:50:53.136441 kubelet[2713]: I0130 12:50:53.136171 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0891400a-e1eb-48b8-b3ae-114768d1daf2-hubble-tls\") pod \"cilium-6hmdv\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " pod="kube-system/cilium-6hmdv" Jan 30 12:50:53.136859 kubelet[2713]: I0130 12:50:53.136755 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0891400a-e1eb-48b8-b3ae-114768d1daf2-cilium-config-path\") pod \"cilium-6hmdv\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " pod="kube-system/cilium-6hmdv" Jan 30 12:50:53.136859 kubelet[2713]: I0130 12:50:53.136805 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-cilium-cgroup\") pod \"cilium-6hmdv\" (UID: 
\"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " pod="kube-system/cilium-6hmdv" Jan 30 12:50:53.136859 kubelet[2713]: I0130 12:50:53.136824 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-cni-path\") pod \"cilium-6hmdv\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " pod="kube-system/cilium-6hmdv" Jan 30 12:50:53.136859 kubelet[2713]: I0130 12:50:53.136843 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-host-proc-sys-net\") pod \"cilium-6hmdv\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " pod="kube-system/cilium-6hmdv" Jan 30 12:50:53.138434 kubelet[2713]: I0130 12:50:53.138404 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-hostproc\") pod \"cilium-6hmdv\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " pod="kube-system/cilium-6hmdv" Jan 30 12:50:53.138476 kubelet[2713]: I0130 12:50:53.138453 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-host-proc-sys-kernel\") pod \"cilium-6hmdv\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " pod="kube-system/cilium-6hmdv" Jan 30 12:50:53.138476 kubelet[2713]: I0130 12:50:53.138472 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-xtables-lock\") pod \"cilium-6hmdv\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " pod="kube-system/cilium-6hmdv" Jan 30 12:50:53.138530 kubelet[2713]: I0130 12:50:53.138491 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j4zf\" (UniqueName: \"kubernetes.io/projected/0891400a-e1eb-48b8-b3ae-114768d1daf2-kube-api-access-9j4zf\") pod \"cilium-6hmdv\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " pod="kube-system/cilium-6hmdv" Jan 30 12:50:53.138719 kubelet[2713]: I0130 12:50:53.138511 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-lib-modules\") pod \"cilium-6hmdv\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " pod="kube-system/cilium-6hmdv" Jan 30 12:50:53.138719 kubelet[2713]: I0130 12:50:53.138593 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0891400a-e1eb-48b8-b3ae-114768d1daf2-clustermesh-secrets\") pod \"cilium-6hmdv\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " pod="kube-system/cilium-6hmdv" Jan 30 12:50:53.138719 kubelet[2713]: I0130 12:50:53.138610 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-cilium-run\") pod \"cilium-6hmdv\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " pod="kube-system/cilium-6hmdv" Jan 30 12:50:53.138719 kubelet[2713]: I0130 12:50:53.138652 2713 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-etc-cni-netd\") pod \"cilium-6hmdv\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " pod="kube-system/cilium-6hmdv" Jan 30 12:50:53.254879 kubelet[2713]: I0130 12:50:53.246350 2713 topology_manager.go:215] "Topology Admit Handler" podUID="0652b5a0-a464-4ea0-9506-ebf8d523baa8" podNamespace="kube-system" podName="cilium-operator-599987898-nkqcv" Jan 30 12:50:53.300662 kubelet[2713]: E0130 12:50:53.300497 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:53.314448 kubelet[2713]: E0130 12:50:53.314031 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:53.323732 containerd[1542]: time="2025-01-30T12:50:53.323549805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2f4xp,Uid:c7577378-ca80-4674-bfec-17affbf03915,Namespace:kube-system,Attempt:0,}" Jan 30 12:50:53.340222 kubelet[2713]: I0130 12:50:53.340125 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0652b5a0-a464-4ea0-9506-ebf8d523baa8-cilium-config-path\") pod \"cilium-operator-599987898-nkqcv\" (UID: \"0652b5a0-a464-4ea0-9506-ebf8d523baa8\") " pod="kube-system/cilium-operator-599987898-nkqcv" Jan 30 12:50:53.340222 kubelet[2713]: I0130 12:50:53.340169 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbq9k\" (UniqueName: \"kubernetes.io/projected/0652b5a0-a464-4ea0-9506-ebf8d523baa8-kube-api-access-pbq9k\") pod \"cilium-operator-599987898-nkqcv\" (UID: \"0652b5a0-a464-4ea0-9506-ebf8d523baa8\") " pod="kube-system/cilium-operator-599987898-nkqcv" Jan 30 12:50:53.349336 containerd[1542]: time="2025-01-30T12:50:53.349024331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:50:53.349336 containerd[1542]: time="2025-01-30T12:50:53.349076492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:50:53.349336 containerd[1542]: time="2025-01-30T12:50:53.349087212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:50:53.349336 containerd[1542]: time="2025-01-30T12:50:53.349172932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:50:53.359874 kubelet[2713]: E0130 12:50:53.359804 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:53.361782 containerd[1542]: time="2025-01-30T12:50:53.360314307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6hmdv,Uid:0891400a-e1eb-48b8-b3ae-114768d1daf2,Namespace:kube-system,Attempt:0,}" Jan 30 12:50:53.390106 containerd[1542]: time="2025-01-30T12:50:53.390068295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2f4xp,Uid:c7577378-ca80-4674-bfec-17affbf03915,Namespace:kube-system,Attempt:0,} returns sandbox id \"14fd13c9389881f795defa870321a1f95a649f260a29fb6dbd84a3c1e18f8e8d\"" Jan 30 12:50:53.390873 kubelet[2713]: E0130 12:50:53.390846 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:53.396795 containerd[1542]: time="2025-01-30T12:50:53.395910604Z" level=info msg="CreateContainer within sandbox \"14fd13c9389881f795defa870321a1f95a649f260a29fb6dbd84a3c1e18f8e8d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 12:50:53.398102 containerd[1542]: time="2025-01-30T12:50:53.397985174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:50:53.398102 containerd[1542]: time="2025-01-30T12:50:53.398046455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:50:53.398102 containerd[1542]: time="2025-01-30T12:50:53.398061735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:50:53.398280 containerd[1542]: time="2025-01-30T12:50:53.398154015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:50:53.418972 containerd[1542]: time="2025-01-30T12:50:53.418418876Z" level=info msg="CreateContainer within sandbox \"14fd13c9389881f795defa870321a1f95a649f260a29fb6dbd84a3c1e18f8e8d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d50536c01446c4b30ea747acdfbe1b3b05bca209f6789a70ddfc8b0e9db94aab\"" Jan 30 12:50:53.420679 containerd[1542]: time="2025-01-30T12:50:53.419633722Z" level=info msg="StartContainer for \"d50536c01446c4b30ea747acdfbe1b3b05bca209f6789a70ddfc8b0e9db94aab\"" Jan 30 12:50:53.443806 containerd[1542]: time="2025-01-30T12:50:53.443685161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6hmdv,Uid:0891400a-e1eb-48b8-b3ae-114768d1daf2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43\"" Jan 30 12:50:53.448193 kubelet[2713]: E0130 12:50:53.447632 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:53.452439 containerd[1542]: time="2025-01-30T12:50:53.450041713Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 12:50:53.483998 containerd[1542]: time="2025-01-30T12:50:53.483951361Z" level=info msg="StartContainer for \"d50536c01446c4b30ea747acdfbe1b3b05bca209f6789a70ddfc8b0e9db94aab\" returns successfully" Jan 30 12:50:53.556460 kubelet[2713]: E0130 12:50:53.556331 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:53.557334 containerd[1542]: time="2025-01-30T12:50:53.557230165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-nkqcv,Uid:0652b5a0-a464-4ea0-9506-ebf8d523baa8,Namespace:kube-system,Attempt:0,}" Jan 30 12:50:53.588695 containerd[1542]: time="2025-01-30T12:50:53.588555600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:50:53.588695 containerd[1542]: time="2025-01-30T12:50:53.588654761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:50:53.588695 containerd[1542]: time="2025-01-30T12:50:53.588682001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:50:53.588958 containerd[1542]: time="2025-01-30T12:50:53.588858522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:50:53.643325 containerd[1542]: time="2025-01-30T12:50:53.643268552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-nkqcv,Uid:0652b5a0-a464-4ea0-9506-ebf8d523baa8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d09bdffc1d3d3656a03b234259c283957ec94f6b1e9af512885d14077ca3262f\"" Jan 30 12:50:53.644036 kubelet[2713]: E0130 12:50:53.644001 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:54.212836 kubelet[2713]: E0130 12:50:54.212777 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:56.402370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount687140839.mount: Deactivated successfully. Jan 30 12:50:57.931587 containerd[1542]: time="2025-01-30T12:50:57.931529200Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:50:57.932778 containerd[1542]: time="2025-01-30T12:50:57.932586524Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 12:50:57.936052 containerd[1542]: time="2025-01-30T12:50:57.935984297Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:50:57.937790 containerd[1542]: time="2025-01-30T12:50:57.937658144Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.487565831s" Jan 30 12:50:57.937790 containerd[1542]: time="2025-01-30T12:50:57.937700384Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 12:50:57.940327 containerd[1542]: time="2025-01-30T12:50:57.940294834Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 12:50:57.945841 containerd[1542]: time="2025-01-30T12:50:57.945795335Z" level=info msg="CreateContainer within sandbox \"e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 12:50:57.972428 containerd[1542]: time="2025-01-30T12:50:57.972358117Z" level=info msg="CreateContainer within sandbox \"e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1\"" Jan 30 12:50:57.973870 containerd[1542]: time="2025-01-30T12:50:57.973172200Z" level=info msg="StartContainer for \"a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1\"" 
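
The cilium image pull above ("... in 4.487565831s") goes through the CRI image service rather than the runtime service. A sketch of the equivalent direct PullImage call; the socket path is an assumption, while the image reference is copied from the log.

```go
// Sketch: the PullImage call behind the "Pulled image ... in 4.487s" entry,
// via the CRI image service. The containerd socket path is an assumption.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	img := runtimeapi.NewImageServiceClient(conn)

	start := time.Now()
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{
			// Digest-pinned reference copied from the log entry above.
			Image: "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s in %s\n", resp.ImageRef, time.Since(start))
}
```
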
Jan 30 12:50:58.033025 containerd[1542]: time="2025-01-30T12:50:58.032982382Z" level=info msg="StartContainer for \"a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1\" returns successfully" Jan 30 12:50:58.114860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1-rootfs.mount: Deactivated successfully. Jan 30 12:50:58.223342 kubelet[2713]: E0130 12:50:58.223187 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:58.345286 containerd[1542]: time="2025-01-30T12:50:58.339729404Z" level=info msg="shim disconnected" id=a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1 namespace=k8s.io Jan 30 12:50:58.345286 containerd[1542]: time="2025-01-30T12:50:58.344079020Z" level=warning msg="cleaning up after shim disconnected" id=a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1 namespace=k8s.io Jan 30 12:50:58.345286 containerd[1542]: time="2025-01-30T12:50:58.344097380Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:50:58.362015 kubelet[2713]: I0130 12:50:58.361901 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2f4xp" podStartSLOduration=6.361882724 podStartE2EDuration="6.361882724s" podCreationTimestamp="2025-01-30 12:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:50:54.231123518 +0000 UTC m=+15.215278299" watchObservedRunningTime="2025-01-30 12:50:58.361882724 +0000 UTC m=+19.346037505" Jan 30 12:50:59.226489 kubelet[2713]: E0130 12:50:59.226334 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:50:59.230630 containerd[1542]: time="2025-01-30T12:50:59.229505191Z" level=info msg="CreateContainer within sandbox \"e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 12:50:59.328688 containerd[1542]: time="2025-01-30T12:50:59.328221324Z" level=info msg="CreateContainer within sandbox \"e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3\"" Jan 30 12:50:59.330641 containerd[1542]: time="2025-01-30T12:50:59.329116727Z" level=info msg="StartContainer for \"1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3\"" Jan 30 12:50:59.386577 containerd[1542]: time="2025-01-30T12:50:59.386518360Z" level=info msg="StartContainer for \"1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3\" returns successfully" Jan 30 12:50:59.463940 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 12:50:59.464227 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:50:59.464320 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:50:59.476133 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:50:59.507281 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 12:50:59.523332 containerd[1542]: time="2025-01-30T12:50:59.523261901Z" level=info msg="shim disconnected" id=1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3 namespace=k8s.io Jan 30 12:50:59.523332 containerd[1542]: time="2025-01-30T12:50:59.523324781Z" level=warning msg="cleaning up after shim disconnected" id=1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3 namespace=k8s.io Jan 30 12:50:59.523332 containerd[1542]: time="2025-01-30T12:50:59.523334021Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:50:59.724016 containerd[1542]: time="2025-01-30T12:50:59.723957617Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:50:59.724594 containerd[1542]: time="2025-01-30T12:50:59.724548259Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 30 12:50:59.725406 containerd[1542]: time="2025-01-30T12:50:59.725368182Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:50:59.726879 containerd[1542]: time="2025-01-30T12:50:59.726834747Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.786498633s" Jan 30 12:50:59.726951 containerd[1542]: time="2025-01-30T12:50:59.726880587Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 12:50:59.729360 containerd[1542]: time="2025-01-30T12:50:59.729238955Z" level=info msg="CreateContainer within sandbox \"d09bdffc1d3d3656a03b234259c283957ec94f6b1e9af512885d14077ca3262f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 12:50:59.745762 containerd[1542]: time="2025-01-30T12:50:59.745689811Z" level=info msg="CreateContainer within sandbox \"d09bdffc1d3d3656a03b234259c283957ec94f6b1e9af512885d14077ca3262f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84\"" Jan 30 12:50:59.746413 containerd[1542]: time="2025-01-30T12:50:59.746208972Z" level=info msg="StartContainer for \"efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84\"" Jan 30 12:50:59.828951 containerd[1542]: time="2025-01-30T12:50:59.828749211Z" level=info msg="StartContainer for \"efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84\" returns successfully" Jan 30 12:50:59.969134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3-rootfs.mount: Deactivated successfully. 
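[Annotation] Both image pulls above report bytes read alongside wall time, so effective pull throughput falls out directly: the cilium image moved 157646710 bytes in 4.487565831s and operator-generic 17135306 bytes in 1.786498633s. Worked out (numbers copied from the log entries above):

    # bytes and seconds from the two "Pulled image ... in" entries above
    pulls = {
        "cilium:v1.12.5":           (157_646_710, 4.487565831),
        "operator-generic:v1.12.5": (17_135_306, 1.786498633),
    }
    for image, (nbytes, secs) in pulls.items():
        print(f"{image}: {nbytes / secs / 2**20:.1f} MiB/s")
    # cilium ≈ 33.5 MiB/s, operator-generic ≈ 9.1 MiB/s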
Jan 30 12:51:00.231144 kubelet[2713]: E0130 12:51:00.230799 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:00.233363 kubelet[2713]: E0130 12:51:00.232797 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:00.238871 containerd[1542]: time="2025-01-30T12:51:00.238787702Z" level=info msg="CreateContainer within sandbox \"e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 12:51:00.270335 kubelet[2713]: I0130 12:51:00.270134 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-nkqcv" podStartSLOduration=1.1871664100000001 podStartE2EDuration="7.270113681s" podCreationTimestamp="2025-01-30 12:50:53 +0000 UTC" firstStartedPulling="2025-01-30 12:50:53.644774599 +0000 UTC m=+14.628929380" lastFinishedPulling="2025-01-30 12:50:59.72772187 +0000 UTC m=+20.711876651" observedRunningTime="2025-01-30 12:51:00.26979712 +0000 UTC m=+21.253951901" watchObservedRunningTime="2025-01-30 12:51:00.270113681 +0000 UTC m=+21.254268462" Jan 30 12:51:00.310664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1614842972.mount: Deactivated successfully. Jan 30 12:51:00.313629 containerd[1542]: time="2025-01-30T12:51:00.313577938Z" level=info msg="CreateContainer within sandbox \"e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629\"" Jan 30 12:51:00.314222 containerd[1542]: time="2025-01-30T12:51:00.314145820Z" level=info msg="StartContainer for \"00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629\"" Jan 30 12:51:00.412522 containerd[1542]: time="2025-01-30T12:51:00.412391091Z" level=info msg="StartContainer for \"00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629\" returns successfully" Jan 30 12:51:00.458669 containerd[1542]: time="2025-01-30T12:51:00.458552116Z" level=info msg="shim disconnected" id=00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629 namespace=k8s.io Jan 30 12:51:00.458669 containerd[1542]: time="2025-01-30T12:51:00.458660397Z" level=warning msg="cleaning up after shim disconnected" id=00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629 namespace=k8s.io Jan 30 12:51:00.458669 containerd[1542]: time="2025-01-30T12:51:00.458671157Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:51:00.965424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629-rootfs.mount: Deactivated successfully. 
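[Annotation] The pod_startup_latency_tracker entry above encodes a simple relation: podStartSLOduration is the end-to-end start duration with image-pull time excluded, i.e. podStartE2EDuration minus (lastFinishedPulling minus firstStartedPulling). Re-deriving it from the cilium-operator fields (timestamps truncated to microseconds here, which explains the sub-microsecond drift from the logged value):

    from datetime import datetime, timezone

    # Fields copied from the cilium-operator-599987898-nkqcv tracker entry above.
    first_pull = datetime(2025, 1, 30, 12, 50, 53, 644774, tzinfo=timezone.utc)
    last_pull  = datetime(2025, 1, 30, 12, 50, 59, 727721, tzinfo=timezone.utc)
    e2e = 7.270113681  # podStartE2EDuration in seconds

    pull = (last_pull - first_pull).total_seconds()
    print(f"pull time ≈ {pull:.6f}s, SLO duration ≈ {e2e - pull:.6f}s")
    # ≈6.082947s and ≈1.187167s, matching podStartSLOduration=1.18716641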
Jan 30 12:51:01.241611 kubelet[2713]: E0130 12:51:01.240599 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:01.241611 kubelet[2713]: E0130 12:51:01.241254 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:01.246176 containerd[1542]: time="2025-01-30T12:51:01.246081476Z" level=info msg="CreateContainer within sandbox \"e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 12:51:01.326224 containerd[1542]: time="2025-01-30T12:51:01.326065953Z" level=info msg="CreateContainer within sandbox \"e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168\"" Jan 30 12:51:01.327647 containerd[1542]: time="2025-01-30T12:51:01.326715995Z" level=info msg="StartContainer for \"b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168\"" Jan 30 12:51:01.402163 containerd[1542]: time="2025-01-30T12:51:01.402115098Z" level=info msg="StartContainer for \"b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168\" returns successfully" Jan 30 12:51:01.522668 containerd[1542]: time="2025-01-30T12:51:01.521843093Z" level=info msg="shim disconnected" id=b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168 namespace=k8s.io Jan 30 12:51:01.522668 containerd[1542]: time="2025-01-30T12:51:01.521903573Z" level=warning msg="cleaning up after shim disconnected" id=b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168 namespace=k8s.io Jan 30 12:51:01.522668 containerd[1542]: time="2025-01-30T12:51:01.521911933Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:51:01.966266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168-rootfs.mount: Deactivated successfully. Jan 30 12:51:02.265103 kubelet[2713]: E0130 12:51:02.264992 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:02.267676 containerd[1542]: time="2025-01-30T12:51:02.267632012Z" level=info msg="CreateContainer within sandbox \"e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 12:51:02.289768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount816075398.mount: Deactivated successfully. 
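[Annotation] By this point each of the Cilium pod's one-shot init steps (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) has run the same three-beat pattern: CreateContainer returns a 64-hex container id, StartContainer returns successfully, then the runc shim disconnects once the container exits and systemd cleans up its rootfs mount. A rough sketch that pairs those events per container id when fed journal lines like the ones in this log (the regexes are my own approximation of the message formats, not a containerd API):

    import re
    from collections import defaultdict

    CREATE = re.compile(r'returns container id \\?"([0-9a-f]{64})\\?"')
    START  = re.compile(r'StartContainer for \\?"([0-9a-f]{64})\\?" returns successfully')
    SHIM   = re.compile(r'msg="shim disconnected" id=([0-9a-f]{64})')

    def lifecycle(lines):
        # Map container id -> ordered list of observed lifecycle events.
        events = defaultdict(list)
        for line in lines:
            for name, pat in (("created", CREATE), ("started", START), ("shim-gone", SHIM)):
                m = pat.search(line)
                if m:
                    events[m.group(1)].append(name)
        return events

    # e.g. lifecycle(open("boot.log")) ->
    #   {"a2cc0870bed7...": ["created", "started", "shim-gone"], ...}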
Jan 30 12:51:02.300241 containerd[1542]: time="2025-01-30T12:51:02.300138062Z" level=info msg="CreateContainer within sandbox \"e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884\"" Jan 30 12:51:02.301699 containerd[1542]: time="2025-01-30T12:51:02.300734064Z" level=info msg="StartContainer for \"0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884\"" Jan 30 12:51:02.404341 containerd[1542]: time="2025-01-30T12:51:02.404291752Z" level=info msg="StartContainer for \"0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884\" returns successfully" Jan 30 12:51:02.516226 kubelet[2713]: I0130 12:51:02.516104 2713 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 12:51:02.555139 kubelet[2713]: I0130 12:51:02.555083 2713 topology_manager.go:215] "Topology Admit Handler" podUID="831c40a9-da91-40b0-90f6-d01dec7b6814" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xfgc8" Jan 30 12:51:02.564603 kubelet[2713]: I0130 12:51:02.562902 2713 topology_manager.go:215] "Topology Admit Handler" podUID="542d01cf-dac1-4b0d-ba51-bf6694fd1e5e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-znzzb" Jan 30 12:51:02.703557 kubelet[2713]: I0130 12:51:02.703510 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfdc9\" (UniqueName: \"kubernetes.io/projected/831c40a9-da91-40b0-90f6-d01dec7b6814-kube-api-access-rfdc9\") pod \"coredns-7db6d8ff4d-xfgc8\" (UID: \"831c40a9-da91-40b0-90f6-d01dec7b6814\") " pod="kube-system/coredns-7db6d8ff4d-xfgc8" Jan 30 12:51:02.703785 kubelet[2713]: I0130 12:51:02.703765 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/542d01cf-dac1-4b0d-ba51-bf6694fd1e5e-config-volume\") pod \"coredns-7db6d8ff4d-znzzb\" (UID: \"542d01cf-dac1-4b0d-ba51-bf6694fd1e5e\") " pod="kube-system/coredns-7db6d8ff4d-znzzb" Jan 30 12:51:02.703862 kubelet[2713]: I0130 12:51:02.703848 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7trz\" (UniqueName: \"kubernetes.io/projected/542d01cf-dac1-4b0d-ba51-bf6694fd1e5e-kube-api-access-m7trz\") pod \"coredns-7db6d8ff4d-znzzb\" (UID: \"542d01cf-dac1-4b0d-ba51-bf6694fd1e5e\") " pod="kube-system/coredns-7db6d8ff4d-znzzb" Jan 30 12:51:02.703950 kubelet[2713]: I0130 12:51:02.703936 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/831c40a9-da91-40b0-90f6-d01dec7b6814-config-volume\") pod \"coredns-7db6d8ff4d-xfgc8\" (UID: \"831c40a9-da91-40b0-90f6-d01dec7b6814\") " pod="kube-system/coredns-7db6d8ff4d-xfgc8" Jan 30 12:51:02.859422 kubelet[2713]: E0130 12:51:02.859264 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:02.862072 containerd[1542]: time="2025-01-30T12:51:02.862011463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xfgc8,Uid:831c40a9-da91-40b0-90f6-d01dec7b6814,Namespace:kube-system,Attempt:0,}" Jan 30 12:51:02.867536 kubelet[2713]: E0130 12:51:02.867459 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:02.868689 containerd[1542]: time="2025-01-30T12:51:02.868654841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-znzzb,Uid:542d01cf-dac1-4b0d-ba51-bf6694fd1e5e,Namespace:kube-system,Attempt:0,}" Jan 30 12:51:03.249939 kubelet[2713]: E0130 12:51:03.248486 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:03.288554 kubelet[2713]: I0130 12:51:03.288455 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6hmdv" podStartSLOduration=6.797650913 podStartE2EDuration="11.288439157s" podCreationTimestamp="2025-01-30 12:50:52 +0000 UTC" firstStartedPulling="2025-01-30 12:50:53.449331469 +0000 UTC m=+14.433486210" lastFinishedPulling="2025-01-30 12:50:57.940119673 +0000 UTC m=+18.924274454" observedRunningTime="2025-01-30 12:51:03.287234194 +0000 UTC m=+24.271388975" watchObservedRunningTime="2025-01-30 12:51:03.288439157 +0000 UTC m=+24.272593938" Jan 30 12:51:04.250510 kubelet[2713]: E0130 12:51:04.250469 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:04.707763 systemd-networkd[1228]: cilium_host: Link UP Jan 30 12:51:04.707887 systemd-networkd[1228]: cilium_net: Link UP Jan 30 12:51:04.708111 systemd-networkd[1228]: cilium_net: Gained carrier Jan 30 12:51:04.708278 systemd-networkd[1228]: cilium_host: Gained carrier Jan 30 12:51:04.708381 systemd-networkd[1228]: cilium_net: Gained IPv6LL Jan 30 12:51:04.708503 systemd-networkd[1228]: cilium_host: Gained IPv6LL Jan 30 12:51:04.829412 systemd-networkd[1228]: cilium_vxlan: Link UP Jan 30 12:51:04.829417 systemd-networkd[1228]: cilium_vxlan: Gained carrier Jan 30 12:51:05.194623 kernel: NET: Registered PF_ALG protocol family Jan 30 12:51:05.253016 kubelet[2713]: E0130 12:51:05.252754 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:05.792017 systemd-networkd[1228]: lxc_health: Link UP Jan 30 12:51:05.801729 systemd-networkd[1228]: lxc_health: Gained carrier Jan 30 12:51:06.022747 systemd-networkd[1228]: cilium_vxlan: Gained IPv6LL Jan 30 12:51:06.026670 systemd-networkd[1228]: lxcb90f53ded106: Link UP Jan 30 12:51:06.034599 kernel: eth0: renamed from tmpebe4e Jan 30 12:51:06.047752 systemd-networkd[1228]: lxcb90f53ded106: Gained carrier Jan 30 12:51:06.047981 systemd-networkd[1228]: lxcff2c0f6e594d: Link UP Jan 30 12:51:06.054871 kernel: eth0: renamed from tmpc5807 Jan 30 12:51:06.064487 systemd-networkd[1228]: lxcff2c0f6e594d: Gained carrier Jan 30 12:51:06.982855 systemd-networkd[1228]: lxc_health: Gained IPv6LL Jan 30 12:51:07.369791 kubelet[2713]: E0130 12:51:07.369640 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:07.433745 systemd-networkd[1228]: lxcb90f53ded106: Gained IPv6LL Jan 30 12:51:07.558727 systemd-networkd[1228]: lxcff2c0f6e594d: Gained IPv6LL Jan 30 12:51:08.258606 kubelet[2713]: E0130 12:51:08.258326 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:09.200878 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:33412.service - OpenSSH per-connection server daemon (10.0.0.1:33412). Jan 30 12:51:09.236830 sshd[3957]: Accepted publickey for core from 10.0.0.1 port 33412 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:09.238266 sshd[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:09.242979 systemd-logind[1519]: New session 8 of user core. Jan 30 12:51:09.256961 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 12:51:09.415725 sshd[3957]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:09.422760 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:33412.service: Deactivated successfully. Jan 30 12:51:09.426489 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 12:51:09.427644 systemd-logind[1519]: Session 8 logged out. Waiting for processes to exit. Jan 30 12:51:09.429005 systemd-logind[1519]: Removed session 8. Jan 30 12:51:09.863870 containerd[1542]: time="2025-01-30T12:51:09.863706746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:51:09.863870 containerd[1542]: time="2025-01-30T12:51:09.863801106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:51:09.863870 containerd[1542]: time="2025-01-30T12:51:09.863818106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:51:09.864336 containerd[1542]: time="2025-01-30T12:51:09.863958146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:51:09.891469 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:51:09.896180 containerd[1542]: time="2025-01-30T12:51:09.892443636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:51:09.896180 containerd[1542]: time="2025-01-30T12:51:09.892500437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:51:09.896180 containerd[1542]: time="2025-01-30T12:51:09.892511597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:51:09.896180 containerd[1542]: time="2025-01-30T12:51:09.892612357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:51:09.912326 containerd[1542]: time="2025-01-30T12:51:09.912276472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xfgc8,Uid:831c40a9-da91-40b0-90f6-d01dec7b6814,Namespace:kube-system,Attempt:0,} returns sandbox id \"c58079db300e2cb6d6b58e0e078bd96557ac0fcdb7c128980224c7fc8f563d23\"" Jan 30 12:51:09.913096 kubelet[2713]: E0130 12:51:09.913073 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:09.920842 containerd[1542]: time="2025-01-30T12:51:09.920796247Z" level=info msg="CreateContainer within sandbox \"c58079db300e2cb6d6b58e0e078bd96557ac0fcdb7c128980224c7fc8f563d23\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 12:51:09.926008 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:51:09.945041 containerd[1542]: time="2025-01-30T12:51:09.944996689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-znzzb,Uid:542d01cf-dac1-4b0d-ba51-bf6694fd1e5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebe4e66448743218de8cacac6dbdf1bfb26f1edb482dc5eee10e0d6eeea8fa27\"" Jan 30 12:51:09.946408 kubelet[2713]: E0130 12:51:09.946121 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:09.948908 containerd[1542]: time="2025-01-30T12:51:09.948848856Z" level=info msg="CreateContainer within sandbox \"c58079db300e2cb6d6b58e0e078bd96557ac0fcdb7c128980224c7fc8f563d23\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b19f8b0772822abc0216e5f98e441be331b4421052ad9fadefa9d938851fb16d\"" Jan 30 12:51:09.949197 containerd[1542]: time="2025-01-30T12:51:09.949161777Z" level=info msg="CreateContainer within sandbox \"ebe4e66448743218de8cacac6dbdf1bfb26f1edb482dc5eee10e0d6eeea8fa27\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 12:51:09.949853 containerd[1542]: time="2025-01-30T12:51:09.949725258Z" level=info msg="StartContainer for \"b19f8b0772822abc0216e5f98e441be331b4421052ad9fadefa9d938851fb16d\"" Jan 30 12:51:09.966409 containerd[1542]: time="2025-01-30T12:51:09.966358647Z" level=info msg="CreateContainer within sandbox \"ebe4e66448743218de8cacac6dbdf1bfb26f1edb482dc5eee10e0d6eeea8fa27\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"08e0d2776d885da516a8685827c6f6cf1157e2defe9ef1387f82c919e8fb10bf\"" Jan 30 12:51:09.968040 containerd[1542]: time="2025-01-30T12:51:09.967112968Z" level=info msg="StartContainer for \"08e0d2776d885da516a8685827c6f6cf1157e2defe9ef1387f82c919e8fb10bf\"" Jan 30 12:51:10.008507 containerd[1542]: time="2025-01-30T12:51:10.008461721Z" level=info msg="StartContainer for \"b19f8b0772822abc0216e5f98e441be331b4421052ad9fadefa9d938851fb16d\" returns successfully" Jan 30 12:51:10.029216 containerd[1542]: time="2025-01-30T12:51:10.029162755Z" level=info msg="StartContainer for \"08e0d2776d885da516a8685827c6f6cf1157e2defe9ef1387f82c919e8fb10bf\" returns successfully" Jan 30 12:51:10.264597 kubelet[2713]: E0130 12:51:10.264425 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:10.267023 
kubelet[2713]: E0130 12:51:10.266997 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:10.290772 kubelet[2713]: I0130 12:51:10.290709 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xfgc8" podStartSLOduration=17.290692308 podStartE2EDuration="17.290692308s" podCreationTimestamp="2025-01-30 12:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:51:10.288823785 +0000 UTC m=+31.272978566" watchObservedRunningTime="2025-01-30 12:51:10.290692308 +0000 UTC m=+31.274847049" Jan 30 12:51:10.872742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843211628.mount: Deactivated successfully. Jan 30 12:51:11.269437 kubelet[2713]: E0130 12:51:11.269404 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:11.271185 kubelet[2713]: E0130 12:51:11.269520 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:12.271217 kubelet[2713]: E0130 12:51:12.271163 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:12.271778 kubelet[2713]: E0130 12:51:12.271740 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:14.435983 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:56860.service - OpenSSH per-connection server daemon (10.0.0.1:56860). Jan 30 12:51:14.477710 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 56860 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:14.479767 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:14.484562 systemd-logind[1519]: New session 9 of user core. Jan 30 12:51:14.491062 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 12:51:14.632650 sshd[4146]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:14.636189 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:56860.service: Deactivated successfully. Jan 30 12:51:14.638892 systemd-logind[1519]: Session 9 logged out. Waiting for processes to exit. Jan 30 12:51:14.640072 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 12:51:14.641664 systemd-logind[1519]: Removed session 9. Jan 30 12:51:19.648754 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:56866.service - OpenSSH per-connection server daemon (10.0.0.1:56866). Jan 30 12:51:19.682151 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 56866 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:19.683959 sshd[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:19.689148 systemd-logind[1519]: New session 10 of user core. Jan 30 12:51:19.704064 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 30 12:51:19.823084 sshd[4162]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:19.827815 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:56866.service: Deactivated successfully. Jan 30 12:51:19.830870 systemd-logind[1519]: Session 10 logged out. Waiting for processes to exit. Jan 30 12:51:19.831258 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 12:51:19.832378 systemd-logind[1519]: Removed session 10. Jan 30 12:51:24.841845 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:57406.service - OpenSSH per-connection server daemon (10.0.0.1:57406). Jan 30 12:51:24.873745 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 57406 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:24.875200 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:24.879629 systemd-logind[1519]: New session 11 of user core. Jan 30 12:51:24.894003 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 12:51:25.028294 sshd[4180]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:25.039445 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:57418.service - OpenSSH per-connection server daemon (10.0.0.1:57418). Jan 30 12:51:25.039910 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:57406.service: Deactivated successfully. Jan 30 12:51:25.042187 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 12:51:25.044244 systemd-logind[1519]: Session 11 logged out. Waiting for processes to exit. Jan 30 12:51:25.047201 systemd-logind[1519]: Removed session 11. Jan 30 12:51:25.075457 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 57418 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:25.077036 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:25.081466 systemd-logind[1519]: New session 12 of user core. Jan 30 12:51:25.090956 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 12:51:25.277969 sshd[4194]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:25.292412 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:57422.service - OpenSSH per-connection server daemon (10.0.0.1:57422). Jan 30 12:51:25.292908 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:57418.service: Deactivated successfully. Jan 30 12:51:25.298409 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 12:51:25.302717 systemd-logind[1519]: Session 12 logged out. Waiting for processes to exit. Jan 30 12:51:25.306279 systemd-logind[1519]: Removed session 12. Jan 30 12:51:25.334831 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 57422 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:25.336330 sshd[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:25.340716 systemd-logind[1519]: New session 13 of user core. Jan 30 12:51:25.348931 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 12:51:25.474670 sshd[4207]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:25.479533 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:57422.service: Deactivated successfully. Jan 30 12:51:25.482801 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 12:51:25.482884 systemd-logind[1519]: Session 13 logged out. Waiting for processes to exit. Jan 30 12:51:25.484206 systemd-logind[1519]: Removed session 13. 
Jan 30 12:51:30.490910 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:57424.service - OpenSSH per-connection server daemon (10.0.0.1:57424). Jan 30 12:51:30.529336 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 57424 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:30.530910 sshd[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:30.544036 systemd-logind[1519]: New session 14 of user core. Jan 30 12:51:30.566056 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 12:51:30.690702 sshd[4225]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:30.696107 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:57424.service: Deactivated successfully. Jan 30 12:51:30.702073 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 12:51:30.705159 systemd-logind[1519]: Session 14 logged out. Waiting for processes to exit. Jan 30 12:51:30.712136 systemd-logind[1519]: Removed session 14. Jan 30 12:51:35.707924 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:42652.service - OpenSSH per-connection server daemon (10.0.0.1:42652). Jan 30 12:51:35.756382 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 42652 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:35.758060 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:35.765153 systemd-logind[1519]: New session 15 of user core. Jan 30 12:51:35.769903 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 12:51:35.906246 sshd[4240]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:35.909583 systemd[1]: sshd@14-10.0.0.34:22-10.0.0.1:42652.service: Deactivated successfully. Jan 30 12:51:35.913396 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 12:51:35.914451 systemd-logind[1519]: Session 15 logged out. Waiting for processes to exit. Jan 30 12:51:35.924042 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:42666.service - OpenSSH per-connection server daemon (10.0.0.1:42666). Jan 30 12:51:35.924944 systemd-logind[1519]: Removed session 15. Jan 30 12:51:35.960587 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 42666 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:35.962389 sshd[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:35.966753 systemd-logind[1519]: New session 16 of user core. Jan 30 12:51:35.973033 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 12:51:36.217114 sshd[4255]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:36.225976 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:42668.service - OpenSSH per-connection server daemon (10.0.0.1:42668). Jan 30 12:51:36.226529 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:42666.service: Deactivated successfully. Jan 30 12:51:36.229623 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 12:51:36.230791 systemd-logind[1519]: Session 16 logged out. Waiting for processes to exit. Jan 30 12:51:36.233001 systemd-logind[1519]: Removed session 16. Jan 30 12:51:36.273713 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 42668 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:36.275544 sshd[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:36.280666 systemd-logind[1519]: New session 17 of user core. 
Jan 30 12:51:36.296919 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 12:51:37.619283 sshd[4265]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:37.630839 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:42676.service - OpenSSH per-connection server daemon (10.0.0.1:42676). Jan 30 12:51:37.631402 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:42668.service: Deactivated successfully. Jan 30 12:51:37.648387 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 12:51:37.651288 systemd-logind[1519]: Session 17 logged out. Waiting for processes to exit. Jan 30 12:51:37.652974 systemd-logind[1519]: Removed session 17. Jan 30 12:51:37.680135 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 42676 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:37.682171 sshd[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:37.687467 systemd-logind[1519]: New session 18 of user core. Jan 30 12:51:37.698954 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 12:51:38.061078 sshd[4287]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:38.070004 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:42680.service - OpenSSH per-connection server daemon (10.0.0.1:42680). Jan 30 12:51:38.070439 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:42676.service: Deactivated successfully. Jan 30 12:51:38.075412 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 12:51:38.076865 systemd-logind[1519]: Session 18 logged out. Waiting for processes to exit. Jan 30 12:51:38.078179 systemd-logind[1519]: Removed session 18. Jan 30 12:51:38.107700 sshd[4301]: Accepted publickey for core from 10.0.0.1 port 42680 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:38.109248 sshd[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:38.113651 systemd-logind[1519]: New session 19 of user core. Jan 30 12:51:38.124915 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 12:51:38.242763 sshd[4301]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:38.247046 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:42680.service: Deactivated successfully. Jan 30 12:51:38.251863 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 12:51:38.252859 systemd-logind[1519]: Session 19 logged out. Waiting for processes to exit. Jan 30 12:51:38.253996 systemd-logind[1519]: Removed session 19. Jan 30 12:51:43.264976 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:59586.service - OpenSSH per-connection server daemon (10.0.0.1:59586). Jan 30 12:51:43.308838 sshd[4325]: Accepted publickey for core from 10.0.0.1 port 59586 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:43.309398 sshd[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:43.314482 systemd-logind[1519]: New session 20 of user core. Jan 30 12:51:43.330977 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 12:51:43.466257 sshd[4325]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:43.473012 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:59586.service: Deactivated successfully. Jan 30 12:51:43.477220 systemd-logind[1519]: Session 20 logged out. Waiting for processes to exit. Jan 30 12:51:43.477847 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 12:51:43.479376 systemd-logind[1519]: Removed session 20. 
Jan 30 12:51:48.476858 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:59592.service - OpenSSH per-connection server daemon (10.0.0.1:59592). Jan 30 12:51:48.512321 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 59592 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:48.514140 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:48.518708 systemd-logind[1519]: New session 21 of user core. Jan 30 12:51:48.525941 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 12:51:48.659464 sshd[4340]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:48.663759 systemd[1]: sshd@20-10.0.0.34:22-10.0.0.1:59592.service: Deactivated successfully. Jan 30 12:51:48.666386 systemd-logind[1519]: Session 21 logged out. Waiting for processes to exit. Jan 30 12:51:48.666493 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 12:51:48.667481 systemd-logind[1519]: Removed session 21. Jan 30 12:51:53.676915 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:37730.service - OpenSSH per-connection server daemon (10.0.0.1:37730). Jan 30 12:51:53.707106 sshd[4358]: Accepted publickey for core from 10.0.0.1 port 37730 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:53.708514 sshd[4358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:53.713327 systemd-logind[1519]: New session 22 of user core. Jan 30 12:51:53.728109 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 12:51:53.850580 sshd[4358]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:53.856886 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:37746.service - OpenSSH per-connection server daemon (10.0.0.1:37746). Jan 30 12:51:53.857301 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:37730.service: Deactivated successfully. Jan 30 12:51:53.861035 systemd-logind[1519]: Session 22 logged out. Waiting for processes to exit. Jan 30 12:51:53.861266 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 12:51:53.865554 systemd-logind[1519]: Removed session 22. Jan 30 12:51:53.890385 sshd[4370]: Accepted publickey for core from 10.0.0.1 port 37746 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:53.891947 sshd[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:53.896080 systemd-logind[1519]: New session 23 of user core. Jan 30 12:51:53.913902 systemd[1]: Started session-23.scope - Session 23 of User core. 
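[Annotation] From session 8 onward the log is dominated by short SSH logins from 10.0.0.1 as user core, each opened with the same RSA key and torn down seconds later (sessions 8 through 23 so far), which looks like periodic automation rather than interactive use. Pairing logind's "New session" / "Removed session" entries gives per-session lifetimes; a rough parsing sketch, assuming one journal entry per line and the "Jan 30 12:51:19.689148"-style prefix used throughout this log (the prefix carries no year, so only deltas are meaningful):

    import re
    from datetime import datetime

    STAMP   = r"(\w{3} \d+ \d{2}:\d{2}:\d{2}\.\d{6})"
    NEW     = re.compile(STAMP + r".*New session (\d+) of user")
    REMOVED = re.compile(STAMP + r".*Removed session (\d+)\.")

    def ts(s):
        return datetime.strptime(s, "%b %d %H:%M:%S.%f")  # year defaults to 1900

    def session_lifetimes(lines):
        opened, lifetimes = {}, {}
        for line in lines:
            if m := NEW.search(line):
                opened[m.group(2)] = ts(m.group(1))
            elif (m := REMOVED.search(line)) and m.group(2) in opened:
                lifetimes[m.group(2)] = (ts(m.group(1)) - opened.pop(m.group(2))).total_seconds()
        return lifetimes

    # Session 10 above: 12:51:19.689148 -> 12:51:19.832378,
    # so ≈0.143s between logind's open and close records.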
Jan 30 12:51:55.771837 kubelet[2713]: I0130 12:51:55.771702 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-znzzb" podStartSLOduration=62.771684023 podStartE2EDuration="1m2.771684023s" podCreationTimestamp="2025-01-30 12:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:51:10.332051257 +0000 UTC m=+31.316206038" watchObservedRunningTime="2025-01-30 12:51:55.771684023 +0000 UTC m=+76.755838764" Jan 30 12:51:55.785259 containerd[1542]: time="2025-01-30T12:51:55.785205136Z" level=info msg="StopContainer for \"efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84\" with timeout 30 (s)" Jan 30 12:51:55.785865 containerd[1542]: time="2025-01-30T12:51:55.785714543Z" level=info msg="Stop container \"efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84\" with signal terminated" Jan 30 12:51:55.829280 containerd[1542]: time="2025-01-30T12:51:55.829239205Z" level=info msg="StopContainer for \"0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884\" with timeout 2 (s)" Jan 30 12:51:55.829653 containerd[1542]: time="2025-01-30T12:51:55.829458968Z" level=info msg="Stop container \"0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884\" with signal terminated" Jan 30 12:51:55.836207 systemd-networkd[1228]: lxc_health: Link DOWN Jan 30 12:51:55.836213 systemd-networkd[1228]: lxc_health: Lost carrier Jan 30 12:51:55.842266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84-rootfs.mount: Deactivated successfully. Jan 30 12:51:55.855613 containerd[1542]: time="2025-01-30T12:51:55.855394259Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 12:51:55.858954 containerd[1542]: time="2025-01-30T12:51:55.858876109Z" level=info msg="shim disconnected" id=efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84 namespace=k8s.io Jan 30 12:51:55.858954 containerd[1542]: time="2025-01-30T12:51:55.858927670Z" level=warning msg="cleaning up after shim disconnected" id=efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84 namespace=k8s.io Jan 30 12:51:55.858954 containerd[1542]: time="2025-01-30T12:51:55.858935990Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:51:55.884183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884-rootfs.mount: Deactivated successfully. 
Jan 30 12:51:55.890423 containerd[1542]: time="2025-01-30T12:51:55.890140076Z" level=info msg="shim disconnected" id=0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884 namespace=k8s.io Jan 30 12:51:55.890423 containerd[1542]: time="2025-01-30T12:51:55.890194716Z" level=warning msg="cleaning up after shim disconnected" id=0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884 namespace=k8s.io Jan 30 12:51:55.890423 containerd[1542]: time="2025-01-30T12:51:55.890209637Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:51:55.907020 containerd[1542]: time="2025-01-30T12:51:55.906967436Z" level=info msg="StopContainer for \"0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884\" returns successfully" Jan 30 12:51:55.908372 containerd[1542]: time="2025-01-30T12:51:55.908337536Z" level=info msg="StopPodSandbox for \"e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43\"" Jan 30 12:51:55.908477 containerd[1542]: time="2025-01-30T12:51:55.908386736Z" level=info msg="Container to stop \"a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:51:55.908477 containerd[1542]: time="2025-01-30T12:51:55.908401257Z" level=info msg="Container to stop \"b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:51:55.908477 containerd[1542]: time="2025-01-30T12:51:55.908410657Z" level=info msg="Container to stop \"0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:51:55.908477 containerd[1542]: time="2025-01-30T12:51:55.908421497Z" level=info msg="Container to stop \"1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:51:55.908477 containerd[1542]: time="2025-01-30T12:51:55.908431817Z" level=info msg="Container to stop \"00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:51:55.910456 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43-shm.mount: Deactivated successfully. 
Jan 30 12:51:55.912717 containerd[1542]: time="2025-01-30T12:51:55.912675958Z" level=info msg="StopContainer for \"efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84\" returns successfully" Jan 30 12:51:55.913694 containerd[1542]: time="2025-01-30T12:51:55.913666652Z" level=info msg="StopPodSandbox for \"d09bdffc1d3d3656a03b234259c283957ec94f6b1e9af512885d14077ca3262f\"" Jan 30 12:51:55.913750 containerd[1542]: time="2025-01-30T12:51:55.913708212Z" level=info msg="Container to stop \"efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:51:55.950060 containerd[1542]: time="2025-01-30T12:51:55.949258481Z" level=info msg="shim disconnected" id=d09bdffc1d3d3656a03b234259c283957ec94f6b1e9af512885d14077ca3262f namespace=k8s.io Jan 30 12:51:55.950060 containerd[1542]: time="2025-01-30T12:51:55.949319081Z" level=warning msg="cleaning up after shim disconnected" id=d09bdffc1d3d3656a03b234259c283957ec94f6b1e9af512885d14077ca3262f namespace=k8s.io Jan 30 12:51:55.950060 containerd[1542]: time="2025-01-30T12:51:55.949328642Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:51:55.950060 containerd[1542]: time="2025-01-30T12:51:55.949592085Z" level=info msg="shim disconnected" id=e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43 namespace=k8s.io Jan 30 12:51:55.950060 containerd[1542]: time="2025-01-30T12:51:55.949635686Z" level=warning msg="cleaning up after shim disconnected" id=e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43 namespace=k8s.io Jan 30 12:51:55.950060 containerd[1542]: time="2025-01-30T12:51:55.949644486Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:51:55.964554 containerd[1542]: time="2025-01-30T12:51:55.964498058Z" level=info msg="TearDown network for sandbox \"d09bdffc1d3d3656a03b234259c283957ec94f6b1e9af512885d14077ca3262f\" successfully" Jan 30 12:51:55.964554 containerd[1542]: time="2025-01-30T12:51:55.964548019Z" level=info msg="StopPodSandbox for \"d09bdffc1d3d3656a03b234259c283957ec94f6b1e9af512885d14077ca3262f\" returns successfully" Jan 30 12:51:55.966384 containerd[1542]: time="2025-01-30T12:51:55.966356645Z" level=info msg="TearDown network for sandbox \"e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43\" successfully" Jan 30 12:51:55.966384 containerd[1542]: time="2025-01-30T12:51:55.966384925Z" level=info msg="StopPodSandbox for \"e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43\" returns successfully" Jan 30 12:51:56.136788 kubelet[2713]: I0130 12:51:56.136648 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0891400a-e1eb-48b8-b3ae-114768d1daf2-hubble-tls\") pod \"0891400a-e1eb-48b8-b3ae-114768d1daf2\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " Jan 30 12:51:56.136788 kubelet[2713]: I0130 12:51:56.136689 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-xtables-lock\") pod \"0891400a-e1eb-48b8-b3ae-114768d1daf2\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " Jan 30 12:51:56.136788 kubelet[2713]: I0130 12:51:56.136706 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-host-proc-sys-kernel\") pod 
\"0891400a-e1eb-48b8-b3ae-114768d1daf2\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " Jan 30 12:51:56.136788 kubelet[2713]: I0130 12:51:56.136726 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbq9k\" (UniqueName: \"kubernetes.io/projected/0652b5a0-a464-4ea0-9506-ebf8d523baa8-kube-api-access-pbq9k\") pod \"0652b5a0-a464-4ea0-9506-ebf8d523baa8\" (UID: \"0652b5a0-a464-4ea0-9506-ebf8d523baa8\") " Jan 30 12:51:56.136788 kubelet[2713]: I0130 12:51:56.136746 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-hostproc\") pod \"0891400a-e1eb-48b8-b3ae-114768d1daf2\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " Jan 30 12:51:56.136788 kubelet[2713]: I0130 12:51:56.136765 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-cilium-cgroup\") pod \"0891400a-e1eb-48b8-b3ae-114768d1daf2\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " Jan 30 12:51:56.137056 kubelet[2713]: I0130 12:51:56.136795 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-cni-path\") pod \"0891400a-e1eb-48b8-b3ae-114768d1daf2\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " Jan 30 12:51:56.137056 kubelet[2713]: I0130 12:51:56.136850 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-cilium-run\") pod \"0891400a-e1eb-48b8-b3ae-114768d1daf2\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " Jan 30 12:51:56.137056 kubelet[2713]: I0130 12:51:56.136869 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-host-proc-sys-net\") pod \"0891400a-e1eb-48b8-b3ae-114768d1daf2\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " Jan 30 12:51:56.137056 kubelet[2713]: I0130 12:51:56.136930 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j4zf\" (UniqueName: \"kubernetes.io/projected/0891400a-e1eb-48b8-b3ae-114768d1daf2-kube-api-access-9j4zf\") pod \"0891400a-e1eb-48b8-b3ae-114768d1daf2\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " Jan 30 12:51:56.137056 kubelet[2713]: I0130 12:51:56.136952 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-lib-modules\") pod \"0891400a-e1eb-48b8-b3ae-114768d1daf2\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " Jan 30 12:51:56.137056 kubelet[2713]: I0130 12:51:56.136973 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0891400a-e1eb-48b8-b3ae-114768d1daf2-clustermesh-secrets\") pod \"0891400a-e1eb-48b8-b3ae-114768d1daf2\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " Jan 30 12:51:56.137184 kubelet[2713]: I0130 12:51:56.136993 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0891400a-e1eb-48b8-b3ae-114768d1daf2-cilium-config-path\") pod 
\"0891400a-e1eb-48b8-b3ae-114768d1daf2\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " Jan 30 12:51:56.137184 kubelet[2713]: I0130 12:51:56.137014 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-bpf-maps\") pod \"0891400a-e1eb-48b8-b3ae-114768d1daf2\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " Jan 30 12:51:56.137184 kubelet[2713]: I0130 12:51:56.137031 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-etc-cni-netd\") pod \"0891400a-e1eb-48b8-b3ae-114768d1daf2\" (UID: \"0891400a-e1eb-48b8-b3ae-114768d1daf2\") " Jan 30 12:51:56.137184 kubelet[2713]: I0130 12:51:56.137048 2713 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0652b5a0-a464-4ea0-9506-ebf8d523baa8-cilium-config-path\") pod \"0652b5a0-a464-4ea0-9506-ebf8d523baa8\" (UID: \"0652b5a0-a464-4ea0-9506-ebf8d523baa8\") " Jan 30 12:51:56.145613 kubelet[2713]: I0130 12:51:56.144921 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0652b5a0-a464-4ea0-9506-ebf8d523baa8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0652b5a0-a464-4ea0-9506-ebf8d523baa8" (UID: "0652b5a0-a464-4ea0-9506-ebf8d523baa8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 12:51:56.145613 kubelet[2713]: I0130 12:51:56.144999 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0891400a-e1eb-48b8-b3ae-114768d1daf2" (UID: "0891400a-e1eb-48b8-b3ae-114768d1daf2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:51:56.145613 kubelet[2713]: I0130 12:51:56.145237 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0891400a-e1eb-48b8-b3ae-114768d1daf2" (UID: "0891400a-e1eb-48b8-b3ae-114768d1daf2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:51:56.145613 kubelet[2713]: I0130 12:51:56.145297 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-hostproc" (OuterVolumeSpecName: "hostproc") pod "0891400a-e1eb-48b8-b3ae-114768d1daf2" (UID: "0891400a-e1eb-48b8-b3ae-114768d1daf2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:51:56.145613 kubelet[2713]: I0130 12:51:56.145315 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0891400a-e1eb-48b8-b3ae-114768d1daf2" (UID: "0891400a-e1eb-48b8-b3ae-114768d1daf2"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:51:56.145835 kubelet[2713]: I0130 12:51:56.145331 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-cni-path" (OuterVolumeSpecName: "cni-path") pod "0891400a-e1eb-48b8-b3ae-114768d1daf2" (UID: "0891400a-e1eb-48b8-b3ae-114768d1daf2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:51:56.145835 kubelet[2713]: I0130 12:51:56.145351 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0891400a-e1eb-48b8-b3ae-114768d1daf2" (UID: "0891400a-e1eb-48b8-b3ae-114768d1daf2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:51:56.146470 kubelet[2713]: I0130 12:51:56.146419 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0891400a-e1eb-48b8-b3ae-114768d1daf2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0891400a-e1eb-48b8-b3ae-114768d1daf2" (UID: "0891400a-e1eb-48b8-b3ae-114768d1daf2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 12:51:56.146541 kubelet[2713]: I0130 12:51:56.146485 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0891400a-e1eb-48b8-b3ae-114768d1daf2" (UID: "0891400a-e1eb-48b8-b3ae-114768d1daf2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:51:56.146541 kubelet[2713]: I0130 12:51:56.146506 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0891400a-e1eb-48b8-b3ae-114768d1daf2" (UID: "0891400a-e1eb-48b8-b3ae-114768d1daf2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:51:56.146541 kubelet[2713]: I0130 12:51:56.146524 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0891400a-e1eb-48b8-b3ae-114768d1daf2" (UID: "0891400a-e1eb-48b8-b3ae-114768d1daf2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:51:56.146541 kubelet[2713]: I0130 12:51:56.146540 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0891400a-e1eb-48b8-b3ae-114768d1daf2" (UID: "0891400a-e1eb-48b8-b3ae-114768d1daf2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 12:51:56.147152 kubelet[2713]: I0130 12:51:56.147075 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0891400a-e1eb-48b8-b3ae-114768d1daf2-kube-api-access-9j4zf" (OuterVolumeSpecName: "kube-api-access-9j4zf") pod "0891400a-e1eb-48b8-b3ae-114768d1daf2" (UID: "0891400a-e1eb-48b8-b3ae-114768d1daf2"). InnerVolumeSpecName "kube-api-access-9j4zf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 12:51:56.147529 kubelet[2713]: I0130 12:51:56.147278 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0891400a-e1eb-48b8-b3ae-114768d1daf2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0891400a-e1eb-48b8-b3ae-114768d1daf2" (UID: "0891400a-e1eb-48b8-b3ae-114768d1daf2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 12:51:56.147691 kubelet[2713]: I0130 12:51:56.147647 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0652b5a0-a464-4ea0-9506-ebf8d523baa8-kube-api-access-pbq9k" (OuterVolumeSpecName: "kube-api-access-pbq9k") pod "0652b5a0-a464-4ea0-9506-ebf8d523baa8" (UID: "0652b5a0-a464-4ea0-9506-ebf8d523baa8"). InnerVolumeSpecName "kube-api-access-pbq9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 12:51:56.148615 kubelet[2713]: I0130 12:51:56.148560 2713 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0891400a-e1eb-48b8-b3ae-114768d1daf2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0891400a-e1eb-48b8-b3ae-114768d1daf2" (UID: "0891400a-e1eb-48b8-b3ae-114768d1daf2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 12:51:56.237416 kubelet[2713]: I0130 12:51:56.237353 2713 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9j4zf\" (UniqueName: \"kubernetes.io/projected/0891400a-e1eb-48b8-b3ae-114768d1daf2-kube-api-access-9j4zf\") on node \"localhost\" DevicePath \"\"" Jan 30 12:51:56.237416 kubelet[2713]: I0130 12:51:56.237394 2713 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 30 12:51:56.237416 kubelet[2713]: I0130 12:51:56.237404 2713 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0891400a-e1eb-48b8-b3ae-114768d1daf2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 30 12:51:56.237416 kubelet[2713]: I0130 12:51:56.237416 2713 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0891400a-e1eb-48b8-b3ae-114768d1daf2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 30 12:51:56.237416 kubelet[2713]: I0130 12:51:56.237424 2713 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 30 12:51:56.237416 kubelet[2713]: I0130 12:51:56.237432 2713 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 30 12:51:56.237416 kubelet[2713]: I0130 12:51:56.237443 2713 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0652b5a0-a464-4ea0-9506-ebf8d523baa8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 30 12:51:56.237802 kubelet[2713]: I0130 12:51:56.237453 2713 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0891400a-e1eb-48b8-b3ae-114768d1daf2-hubble-tls\") on 
node \"localhost\" DevicePath \"\"" Jan 30 12:51:56.237802 kubelet[2713]: I0130 12:51:56.237461 2713 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 30 12:51:56.237802 kubelet[2713]: I0130 12:51:56.237470 2713 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 30 12:51:56.237802 kubelet[2713]: I0130 12:51:56.237478 2713 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pbq9k\" (UniqueName: \"kubernetes.io/projected/0652b5a0-a464-4ea0-9506-ebf8d523baa8-kube-api-access-pbq9k\") on node \"localhost\" DevicePath \"\"" Jan 30 12:51:56.237802 kubelet[2713]: I0130 12:51:56.237486 2713 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 30 12:51:56.237802 kubelet[2713]: I0130 12:51:56.237494 2713 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 30 12:51:56.237802 kubelet[2713]: I0130 12:51:56.237502 2713 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 30 12:51:56.237802 kubelet[2713]: I0130 12:51:56.237511 2713 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 30 12:51:56.237974 kubelet[2713]: I0130 12:51:56.237520 2713 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0891400a-e1eb-48b8-b3ae-114768d1daf2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 30 12:51:56.374262 kubelet[2713]: I0130 12:51:56.374111 2713 scope.go:117] "RemoveContainer" containerID="efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84" Jan 30 12:51:56.376875 containerd[1542]: time="2025-01-30T12:51:56.376726889Z" level=info msg="RemoveContainer for \"efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84\"" Jan 30 12:51:56.392626 containerd[1542]: time="2025-01-30T12:51:56.392181304Z" level=info msg="RemoveContainer for \"efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84\" returns successfully" Jan 30 12:51:56.392741 kubelet[2713]: I0130 12:51:56.392517 2713 scope.go:117] "RemoveContainer" containerID="efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84" Jan 30 12:51:56.392826 containerd[1542]: time="2025-01-30T12:51:56.392784313Z" level=error msg="ContainerStatus for \"efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84\": not found" Jan 30 12:51:56.402091 kubelet[2713]: E0130 12:51:56.402040 2713 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84\": not found" containerID="efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84" Jan 30 12:51:56.402199 kubelet[2713]: I0130 12:51:56.402097 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84"} err="failed to get container status \"efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84\": rpc error: code = NotFound desc = an error occurred when try to find container \"efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84\": not found" Jan 30 12:51:56.402199 kubelet[2713]: I0130 12:51:56.402193 2713 scope.go:117] "RemoveContainer" containerID="0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884" Jan 30 12:51:56.403694 containerd[1542]: time="2025-01-30T12:51:56.403648104Z" level=info msg="RemoveContainer for \"0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884\"" Jan 30 12:51:56.407529 containerd[1542]: time="2025-01-30T12:51:56.407489517Z" level=info msg="RemoveContainer for \"0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884\" returns successfully" Jan 30 12:51:56.407741 kubelet[2713]: I0130 12:51:56.407717 2713 scope.go:117] "RemoveContainer" containerID="b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168" Jan 30 12:51:56.409177 containerd[1542]: time="2025-01-30T12:51:56.408919057Z" level=info msg="RemoveContainer for \"b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168\"" Jan 30 12:51:56.411873 containerd[1542]: time="2025-01-30T12:51:56.411836778Z" level=info msg="RemoveContainer for \"b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168\" returns successfully" Jan 30 12:51:56.412206 kubelet[2713]: I0130 12:51:56.412177 2713 scope.go:117] "RemoveContainer" containerID="00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629" Jan 30 12:51:56.413769 containerd[1542]: time="2025-01-30T12:51:56.413735764Z" level=info msg="RemoveContainer for \"00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629\"" Jan 30 12:51:56.416923 containerd[1542]: time="2025-01-30T12:51:56.416887728Z" level=info msg="RemoveContainer for \"00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629\" returns successfully" Jan 30 12:51:56.417200 kubelet[2713]: I0130 12:51:56.417177 2713 scope.go:117] "RemoveContainer" containerID="1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3" Jan 30 12:51:56.418404 containerd[1542]: time="2025-01-30T12:51:56.418376389Z" level=info msg="RemoveContainer for \"1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3\"" Jan 30 12:51:56.422060 containerd[1542]: time="2025-01-30T12:51:56.421910958Z" level=info msg="RemoveContainer for \"1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3\" returns successfully" Jan 30 12:51:56.422342 kubelet[2713]: I0130 12:51:56.422312 2713 scope.go:117] "RemoveContainer" containerID="a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1" Jan 30 12:51:56.423558 containerd[1542]: time="2025-01-30T12:51:56.423523380Z" level=info msg="RemoveContainer for \"a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1\"" Jan 30 12:51:56.427518 containerd[1542]: time="2025-01-30T12:51:56.427482275Z" level=info msg="RemoveContainer for \"a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1\" returns successfully" Jan 30 12:51:56.427754 kubelet[2713]: I0130 12:51:56.427729 
2713 scope.go:117] "RemoveContainer" containerID="0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884" Jan 30 12:51:56.428016 containerd[1542]: time="2025-01-30T12:51:56.427974242Z" level=error msg="ContainerStatus for \"0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884\": not found" Jan 30 12:51:56.428113 kubelet[2713]: E0130 12:51:56.428090 2713 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884\": not found" containerID="0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884" Jan 30 12:51:56.428790 kubelet[2713]: I0130 12:51:56.428124 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884"} err="failed to get container status \"0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a37a9e41457a8625ad17818590a96d258fe685244caf657214e2eed69815884\": not found" Jan 30 12:51:56.428790 kubelet[2713]: I0130 12:51:56.428730 2713 scope.go:117] "RemoveContainer" containerID="b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168" Jan 30 12:51:56.429167 containerd[1542]: time="2025-01-30T12:51:56.429128418Z" level=error msg="ContainerStatus for \"b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168\": not found" Jan 30 12:51:56.429499 kubelet[2713]: E0130 12:51:56.429334 2713 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168\": not found" containerID="b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168" Jan 30 12:51:56.429499 kubelet[2713]: I0130 12:51:56.429361 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168"} err="failed to get container status \"b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4e390f73142de30c76406d05ef4cabbd87a5286e79180d4d1a41580a5fe2168\": not found" Jan 30 12:51:56.429499 kubelet[2713]: I0130 12:51:56.429379 2713 scope.go:117] "RemoveContainer" containerID="00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629" Jan 30 12:51:56.429619 containerd[1542]: time="2025-01-30T12:51:56.429549144Z" level=error msg="ContainerStatus for \"00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629\": not found" Jan 30 12:51:56.430369 kubelet[2713]: E0130 12:51:56.430345 2713 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629\": not found" containerID="00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629" Jan 30 12:51:56.430415 kubelet[2713]: I0130 12:51:56.430374 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629"} err="failed to get container status \"00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629\": rpc error: code = NotFound desc = an error occurred when try to find container \"00d64daa246dc16aa5e136ace845b69a0568a500bc7d129d9d1fbe48d535e629\": not found" Jan 30 12:51:56.430415 kubelet[2713]: I0130 12:51:56.430391 2713 scope.go:117] "RemoveContainer" containerID="1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3" Jan 30 12:51:56.430603 containerd[1542]: time="2025-01-30T12:51:56.430555638Z" level=error msg="ContainerStatus for \"1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3\": not found" Jan 30 12:51:56.430721 kubelet[2713]: E0130 12:51:56.430698 2713 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3\": not found" containerID="1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3" Jan 30 12:51:56.430758 kubelet[2713]: I0130 12:51:56.430727 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3"} err="failed to get container status \"1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3\": rpc error: code = NotFound desc = an error occurred when try to find container \"1253d8867562cd5991308b5dac35054a50ee448374155e21c8335af28c014cc3\": not found" Jan 30 12:51:56.430758 kubelet[2713]: I0130 12:51:56.430743 2713 scope.go:117] "RemoveContainer" containerID="a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1" Jan 30 12:51:56.431000 containerd[1542]: time="2025-01-30T12:51:56.430929363Z" level=error msg="ContainerStatus for \"a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1\": not found" Jan 30 12:51:56.431107 kubelet[2713]: E0130 12:51:56.431087 2713 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1\": not found" containerID="a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1" Jan 30 12:51:56.431150 kubelet[2713]: I0130 12:51:56.431114 2713 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1"} err="failed to get container status \"a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1\": rpc error: code = NotFound desc = an error occurred when try to find container \"a2cc0870bed7cc32a22619db4736b11e5cfdf3f5ff7a1ac0f5f67981f6f03fc1\": not found" Jan 30 12:51:56.798978 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-d09bdffc1d3d3656a03b234259c283957ec94f6b1e9af512885d14077ca3262f-rootfs.mount: Deactivated successfully. Jan 30 12:51:56.799147 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d09bdffc1d3d3656a03b234259c283957ec94f6b1e9af512885d14077ca3262f-shm.mount: Deactivated successfully. Jan 30 12:51:56.799231 systemd[1]: var-lib-kubelet-pods-0652b5a0\x2da464\x2d4ea0\x2d9506\x2debf8d523baa8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpbq9k.mount: Deactivated successfully. Jan 30 12:51:56.799317 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e53f76e3f99d936c8b7b09f7b574a06be9c8a18fe6c95f09de51fbeec40a8f43-rootfs.mount: Deactivated successfully. Jan 30 12:51:56.799392 systemd[1]: var-lib-kubelet-pods-0891400a\x2de1eb\x2d48b8\x2db3ae\x2d114768d1daf2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9j4zf.mount: Deactivated successfully. Jan 30 12:51:56.799470 systemd[1]: var-lib-kubelet-pods-0891400a\x2de1eb\x2d48b8\x2db3ae\x2d114768d1daf2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 12:51:56.799545 systemd[1]: var-lib-kubelet-pods-0891400a\x2de1eb\x2d48b8\x2db3ae\x2d114768d1daf2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 12:51:57.169697 kubelet[2713]: I0130 12:51:57.169381 2713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0652b5a0-a464-4ea0-9506-ebf8d523baa8" path="/var/lib/kubelet/pods/0652b5a0-a464-4ea0-9506-ebf8d523baa8/volumes" Jan 30 12:51:57.170380 kubelet[2713]: I0130 12:51:57.169781 2713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0891400a-e1eb-48b8-b3ae-114768d1daf2" path="/var/lib/kubelet/pods/0891400a-e1eb-48b8-b3ae-114768d1daf2/volumes" Jan 30 12:51:57.723276 sshd[4370]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:57.734889 systemd[1]: Started sshd@23-10.0.0.34:22-10.0.0.1:37760.service - OpenSSH per-connection server daemon (10.0.0.1:37760). Jan 30 12:51:57.735465 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:37746.service: Deactivated successfully. Jan 30 12:51:57.738580 systemd-logind[1519]: Session 23 logged out. Waiting for processes to exit. Jan 30 12:51:57.740088 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 12:51:57.741363 systemd-logind[1519]: Removed session 23. Jan 30 12:51:57.773502 sshd[4533]: Accepted publickey for core from 10.0.0.1 port 37760 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:57.774954 sshd[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:57.780135 systemd-logind[1519]: New session 24 of user core. Jan 30 12:51:57.792037 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 12:51:58.819621 sshd[4533]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:58.830833 systemd[1]: Started sshd@24-10.0.0.34:22-10.0.0.1:37766.service - OpenSSH per-connection server daemon (10.0.0.1:37766). Jan 30 12:51:58.831242 systemd[1]: sshd@23-10.0.0.34:22-10.0.0.1:37760.service: Deactivated successfully. 
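The RemoveContainer/ContainerStatus exchanges above follow an idempotent-delete pattern: each container is removed once, and when the kubelet re-queries the old container ID the runtime answers with gRPC NotFound, which the kubelet logs and treats as "already gone" rather than retrying. A minimal sketch of that handling, assuming a sentinel error in place of real gRPC status codes; the helper names are illustrative, not the kubelet's.

    package main

    import (
        "errors"
        "fmt"
    )

    // notFoundErr stands in for "rpc error: code = NotFound" from the runtime.
    var notFoundErr = errors.New("rpc error: code = NotFound")

    // containerStatus simulates querying the runtime for a container that has
    // already been removed; the message mirrors the containerd wording logged
    // above (including its "when try to find" phrasing).
    func containerStatus(id string, existing map[string]bool) error {
        if !existing[id] {
            return fmt.Errorf("%w desc = an error occurred when try to find container %q: not found",
                notFoundErr, id)
        }
        return nil
    }

    func main() {
        existing := map[string]bool{} // the container was already removed
        err := containerStatus("efd90fd16f25135c485c07807bf666b65dbb3878848294bf7ee57ee55bc35e84", existing)
        if errors.Is(err, notFoundErr) {
            // Idempotent delete: NotFound after RemoveContainer means the
            // work is done, so log it and move on instead of retrying.
            fmt.Println("container already gone:", err)
        }
    }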
Jan 30 12:51:58.837830 kubelet[2713]: I0130 12:51:58.837458 2713 topology_manager.go:215] "Topology Admit Handler" podUID="114311e6-d3bb-4672-aa28-bff19d492131" podNamespace="kube-system" podName="cilium-wsbtj" Jan 30 12:51:58.840473 kubelet[2713]: E0130 12:51:58.838483 2713 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0652b5a0-a464-4ea0-9506-ebf8d523baa8" containerName="cilium-operator" Jan 30 12:51:58.840473 kubelet[2713]: E0130 12:51:58.838598 2713 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0891400a-e1eb-48b8-b3ae-114768d1daf2" containerName="clean-cilium-state" Jan 30 12:51:58.840473 kubelet[2713]: E0130 12:51:58.838607 2713 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0891400a-e1eb-48b8-b3ae-114768d1daf2" containerName="cilium-agent" Jan 30 12:51:58.840473 kubelet[2713]: E0130 12:51:58.838614 2713 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0891400a-e1eb-48b8-b3ae-114768d1daf2" containerName="mount-cgroup" Jan 30 12:51:58.840473 kubelet[2713]: E0130 12:51:58.838620 2713 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0891400a-e1eb-48b8-b3ae-114768d1daf2" containerName="apply-sysctl-overwrites" Jan 30 12:51:58.840473 kubelet[2713]: E0130 12:51:58.839423 2713 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0891400a-e1eb-48b8-b3ae-114768d1daf2" containerName="mount-bpf-fs" Jan 30 12:51:58.840473 kubelet[2713]: I0130 12:51:58.839465 2713 memory_manager.go:354] "RemoveStaleState removing state" podUID="0652b5a0-a464-4ea0-9506-ebf8d523baa8" containerName="cilium-operator" Jan 30 12:51:58.840473 kubelet[2713]: I0130 12:51:58.839473 2713 memory_manager.go:354] "RemoveStaleState removing state" podUID="0891400a-e1eb-48b8-b3ae-114768d1daf2" containerName="cilium-agent" Jan 30 12:51:58.842142 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 12:51:58.843655 systemd-logind[1519]: Session 24 logged out. Waiting for processes to exit. Jan 30 12:51:58.850301 systemd-logind[1519]: Removed session 24. Jan 30 12:51:58.897448 sshd[4547]: Accepted publickey for core from 10.0.0.1 port 37766 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:58.898987 sshd[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:58.903635 systemd-logind[1519]: New session 25 of user core. Jan 30 12:51:58.910874 systemd[1]: Started session-25.scope - Session 25 of User core. 
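The Topology Admit Handler entry admits the replacement pod cilium-wsbtj, and the cpu_manager/memory_manager "RemoveStaleState" entries drop per-container resource state left behind by the two deleted pods. A minimal sketch of that pattern, assuming state keyed by (podUID, containerName); the types and the cpuset values are illustrative, not the kubelet managers' actual data structures.

    package main

    import "fmt"

    // key identifies a container's saved resource state.
    type key struct{ podUID, containerName string }

    // removeStaleState drops state for containers whose pods are no longer
    // active, mirroring the cpu_manager/memory_manager messages above.
    func removeStaleState(state map[key]string, activePods map[string]bool) {
        for k := range state {
            if !activePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
                    k.podUID, k.containerName)
                delete(state, k)
            }
        }
    }

    func main() {
        state := map[key]string{
            {"0652b5a0-a464-4ea0-9506-ebf8d523baa8", "cilium-operator"}: "cpuset=0-3", // illustrative value
            {"0891400a-e1eb-48b8-b3ae-114768d1daf2", "cilium-agent"}:    "cpuset=0-3", // illustrative value
        }
        activePods := map[string]bool{"114311e6-d3bb-4672-aa28-bff19d492131": true} // cilium-wsbtj
        removeStaleState(state, activePods)
    }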
Jan 30 12:51:58.957336 kubelet[2713]: I0130 12:51:58.957137 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/114311e6-d3bb-4672-aa28-bff19d492131-host-proc-sys-net\") pod \"cilium-wsbtj\" (UID: \"114311e6-d3bb-4672-aa28-bff19d492131\") " pod="kube-system/cilium-wsbtj" Jan 30 12:51:58.957336 kubelet[2713]: I0130 12:51:58.957183 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/114311e6-d3bb-4672-aa28-bff19d492131-hubble-tls\") pod \"cilium-wsbtj\" (UID: \"114311e6-d3bb-4672-aa28-bff19d492131\") " pod="kube-system/cilium-wsbtj" Jan 30 12:51:58.957336 kubelet[2713]: I0130 12:51:58.957205 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/114311e6-d3bb-4672-aa28-bff19d492131-etc-cni-netd\") pod \"cilium-wsbtj\" (UID: \"114311e6-d3bb-4672-aa28-bff19d492131\") " pod="kube-system/cilium-wsbtj" Jan 30 12:51:58.957336 kubelet[2713]: I0130 12:51:58.957222 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/114311e6-d3bb-4672-aa28-bff19d492131-lib-modules\") pod \"cilium-wsbtj\" (UID: \"114311e6-d3bb-4672-aa28-bff19d492131\") " pod="kube-system/cilium-wsbtj" Jan 30 12:51:58.957336 kubelet[2713]: I0130 12:51:58.957238 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/114311e6-d3bb-4672-aa28-bff19d492131-cni-path\") pod \"cilium-wsbtj\" (UID: \"114311e6-d3bb-4672-aa28-bff19d492131\") " pod="kube-system/cilium-wsbtj" Jan 30 12:51:58.957336 kubelet[2713]: I0130 12:51:58.957253 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/114311e6-d3bb-4672-aa28-bff19d492131-xtables-lock\") pod \"cilium-wsbtj\" (UID: \"114311e6-d3bb-4672-aa28-bff19d492131\") " pod="kube-system/cilium-wsbtj" Jan 30 12:51:58.957583 kubelet[2713]: I0130 12:51:58.957268 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/114311e6-d3bb-4672-aa28-bff19d492131-cilium-run\") pod \"cilium-wsbtj\" (UID: \"114311e6-d3bb-4672-aa28-bff19d492131\") " pod="kube-system/cilium-wsbtj" Jan 30 12:51:58.957583 kubelet[2713]: I0130 12:51:58.957286 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/114311e6-d3bb-4672-aa28-bff19d492131-hostproc\") pod \"cilium-wsbtj\" (UID: \"114311e6-d3bb-4672-aa28-bff19d492131\") " pod="kube-system/cilium-wsbtj" Jan 30 12:51:58.957974 kubelet[2713]: I0130 12:51:58.957301 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/114311e6-d3bb-4672-aa28-bff19d492131-cilium-ipsec-secrets\") pod \"cilium-wsbtj\" (UID: \"114311e6-d3bb-4672-aa28-bff19d492131\") " pod="kube-system/cilium-wsbtj" Jan 30 12:51:58.957974 kubelet[2713]: I0130 12:51:58.957817 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/114311e6-d3bb-4672-aa28-bff19d492131-cilium-config-path\") pod \"cilium-wsbtj\" (UID: \"114311e6-d3bb-4672-aa28-bff19d492131\") " pod="kube-system/cilium-wsbtj" Jan 30 12:51:58.957974 kubelet[2713]: I0130 12:51:58.957856 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/114311e6-d3bb-4672-aa28-bff19d492131-host-proc-sys-kernel\") pod \"cilium-wsbtj\" (UID: \"114311e6-d3bb-4672-aa28-bff19d492131\") " pod="kube-system/cilium-wsbtj" Jan 30 12:51:58.957974 kubelet[2713]: I0130 12:51:58.957874 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm8hs\" (UniqueName: \"kubernetes.io/projected/114311e6-d3bb-4672-aa28-bff19d492131-kube-api-access-pm8hs\") pod \"cilium-wsbtj\" (UID: \"114311e6-d3bb-4672-aa28-bff19d492131\") " pod="kube-system/cilium-wsbtj" Jan 30 12:51:58.957974 kubelet[2713]: I0130 12:51:58.957891 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/114311e6-d3bb-4672-aa28-bff19d492131-bpf-maps\") pod \"cilium-wsbtj\" (UID: \"114311e6-d3bb-4672-aa28-bff19d492131\") " pod="kube-system/cilium-wsbtj" Jan 30 12:51:58.958143 kubelet[2713]: I0130 12:51:58.957905 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/114311e6-d3bb-4672-aa28-bff19d492131-cilium-cgroup\") pod \"cilium-wsbtj\" (UID: \"114311e6-d3bb-4672-aa28-bff19d492131\") " pod="kube-system/cilium-wsbtj" Jan 30 12:51:58.958143 kubelet[2713]: I0130 12:51:58.957942 2713 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/114311e6-d3bb-4672-aa28-bff19d492131-clustermesh-secrets\") pod \"cilium-wsbtj\" (UID: \"114311e6-d3bb-4672-aa28-bff19d492131\") " pod="kube-system/cilium-wsbtj" Jan 30 12:51:58.960252 sshd[4547]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:58.970843 systemd[1]: Started sshd@25-10.0.0.34:22-10.0.0.1:37772.service - OpenSSH per-connection server daemon (10.0.0.1:37772). Jan 30 12:51:58.971237 systemd[1]: sshd@24-10.0.0.34:22-10.0.0.1:37766.service: Deactivated successfully. Jan 30 12:51:58.973553 systemd-logind[1519]: Session 25 logged out. Waiting for processes to exit. Jan 30 12:51:58.974547 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 12:51:58.976761 systemd-logind[1519]: Removed session 25. Jan 30 12:51:59.000866 sshd[4556]: Accepted publickey for core from 10.0.0.1 port 37772 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:59.002348 sshd[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:59.006932 systemd-logind[1519]: New session 26 of user core. Jan 30 12:51:59.017828 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 30 12:51:59.162971 kubelet[2713]: E0130 12:51:59.162924 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:59.163978 containerd[1542]: time="2025-01-30T12:51:59.163423739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wsbtj,Uid:114311e6-d3bb-4672-aa28-bff19d492131,Namespace:kube-system,Attempt:0,}" Jan 30 12:51:59.167285 kubelet[2713]: E0130 12:51:59.167215 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:59.217988 kubelet[2713]: E0130 12:51:59.217942 2713 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 12:51:59.258844 containerd[1542]: time="2025-01-30T12:51:59.258610163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:51:59.258844 containerd[1542]: time="2025-01-30T12:51:59.258658884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:51:59.258844 containerd[1542]: time="2025-01-30T12:51:59.258670244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:51:59.258844 containerd[1542]: time="2025-01-30T12:51:59.258757765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:51:59.297536 containerd[1542]: time="2025-01-30T12:51:59.297495583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wsbtj,Uid:114311e6-d3bb-4672-aa28-bff19d492131,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c77a397f9ba47fc571c5d23e4ebbe9cab452773234e343e2cd2566e90428cac\"" Jan 30 12:51:59.298274 kubelet[2713]: E0130 12:51:59.298228 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:59.300771 containerd[1542]: time="2025-01-30T12:51:59.300732545Z" level=info msg="CreateContainer within sandbox \"2c77a397f9ba47fc571c5d23e4ebbe9cab452773234e343e2cd2566e90428cac\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 12:51:59.316551 containerd[1542]: time="2025-01-30T12:51:59.316497187Z" level=info msg="CreateContainer within sandbox \"2c77a397f9ba47fc571c5d23e4ebbe9cab452773234e343e2cd2566e90428cac\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"150df3478d85496a1d10888c5181861f2c19e207409034445137faa651ae7012\"" Jan 30 12:51:59.317013 containerd[1542]: time="2025-01-30T12:51:59.316983154Z" level=info msg="StartContainer for \"150df3478d85496a1d10888c5181861f2c19e207409034445137faa651ae7012\"" Jan 30 12:51:59.375888 containerd[1542]: time="2025-01-30T12:51:59.375763269Z" level=info msg="StartContainer for \"150df3478d85496a1d10888c5181861f2c19e207409034445137faa651ae7012\" returns successfully" Jan 30 12:51:59.391848 kubelet[2713]: E0130 12:51:59.391431 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 30 12:51:59.423290 containerd[1542]: time="2025-01-30T12:51:59.423164039Z" level=info msg="shim disconnected" id=150df3478d85496a1d10888c5181861f2c19e207409034445137faa651ae7012 namespace=k8s.io Jan 30 12:51:59.423290 containerd[1542]: time="2025-01-30T12:51:59.423224480Z" level=warning msg="cleaning up after shim disconnected" id=150df3478d85496a1d10888c5181861f2c19e207409034445137faa651ae7012 namespace=k8s.io Jan 30 12:51:59.423290 containerd[1542]: time="2025-01-30T12:51:59.423234080Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:52:00.394087 kubelet[2713]: E0130 12:52:00.394048 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:52:00.397480 containerd[1542]: time="2025-01-30T12:52:00.397236553Z" level=info msg="CreateContainer within sandbox \"2c77a397f9ba47fc571c5d23e4ebbe9cab452773234e343e2cd2566e90428cac\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 12:52:00.411717 containerd[1542]: time="2025-01-30T12:52:00.411665774Z" level=info msg="CreateContainer within sandbox \"2c77a397f9ba47fc571c5d23e4ebbe9cab452773234e343e2cd2566e90428cac\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"29378a70d6e9662142603218483350c72ac7d569fc822d3d8ad932655046ca82\"" Jan 30 12:52:00.412239 containerd[1542]: time="2025-01-30T12:52:00.412193901Z" level=info msg="StartContainer for \"29378a70d6e9662142603218483350c72ac7d569fc822d3d8ad932655046ca82\"" Jan 30 12:52:00.413500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount652960193.mount: Deactivated successfully. Jan 30 12:52:00.459479 containerd[1542]: time="2025-01-30T12:52:00.459413812Z" level=info msg="StartContainer for \"29378a70d6e9662142603218483350c72ac7d569fc822d3d8ad932655046ca82\" returns successfully" Jan 30 12:52:00.494043 containerd[1542]: time="2025-01-30T12:52:00.493974445Z" level=info msg="shim disconnected" id=29378a70d6e9662142603218483350c72ac7d569fc822d3d8ad932655046ca82 namespace=k8s.io Jan 30 12:52:00.494043 containerd[1542]: time="2025-01-30T12:52:00.494032726Z" level=warning msg="cleaning up after shim disconnected" id=29378a70d6e9662142603218483350c72ac7d569fc822d3d8ad932655046ca82 namespace=k8s.io Jan 30 12:52:00.494043 containerd[1542]: time="2025-01-30T12:52:00.494041726Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:52:00.866434 kubelet[2713]: I0130 12:52:00.866371 2713 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T12:52:00Z","lastTransitionTime":"2025-01-30T12:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 12:52:01.063559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29378a70d6e9662142603218483350c72ac7d569fc822d3d8ad932655046ca82-rootfs.mount: Deactivated successfully. 
Jan 30 12:52:01.396655 kubelet[2713]: E0130 12:52:01.396621 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:52:01.400281 containerd[1542]: time="2025-01-30T12:52:01.400145350Z" level=info msg="CreateContainer within sandbox \"2c77a397f9ba47fc571c5d23e4ebbe9cab452773234e343e2cd2566e90428cac\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 12:52:01.417334 containerd[1542]: time="2025-01-30T12:52:01.417265119Z" level=info msg="CreateContainer within sandbox \"2c77a397f9ba47fc571c5d23e4ebbe9cab452773234e343e2cd2566e90428cac\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5d2b86d9e1e6765177891f65c84649a73efc1e9d939540d3a1505675af8f7f2c\"" Jan 30 12:52:01.419513 containerd[1542]: time="2025-01-30T12:52:01.419362304Z" level=info msg="StartContainer for \"5d2b86d9e1e6765177891f65c84649a73efc1e9d939540d3a1505675af8f7f2c\"" Jan 30 12:52:01.481439 containerd[1542]: time="2025-01-30T12:52:01.481348061Z" level=info msg="StartContainer for \"5d2b86d9e1e6765177891f65c84649a73efc1e9d939540d3a1505675af8f7f2c\" returns successfully" Jan 30 12:52:01.509046 containerd[1542]: time="2025-01-30T12:52:01.508982158Z" level=info msg="shim disconnected" id=5d2b86d9e1e6765177891f65c84649a73efc1e9d939540d3a1505675af8f7f2c namespace=k8s.io Jan 30 12:52:01.509046 containerd[1542]: time="2025-01-30T12:52:01.509046199Z" level=warning msg="cleaning up after shim disconnected" id=5d2b86d9e1e6765177891f65c84649a73efc1e9d939540d3a1505675af8f7f2c namespace=k8s.io Jan 30 12:52:01.509247 containerd[1542]: time="2025-01-30T12:52:01.509055479Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:52:02.064281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d2b86d9e1e6765177891f65c84649a73efc1e9d939540d3a1505675af8f7f2c-rootfs.mount: Deactivated successfully. 
Jan 30 12:52:02.167085 kubelet[2713]: E0130 12:52:02.166676 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:52:02.405376 kubelet[2713]: E0130 12:52:02.400470 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:52:02.409516 containerd[1542]: time="2025-01-30T12:52:02.402962867Z" level=info msg="CreateContainer within sandbox \"2c77a397f9ba47fc571c5d23e4ebbe9cab452773234e343e2cd2566e90428cac\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 12:52:02.439425 containerd[1542]: time="2025-01-30T12:52:02.439078616Z" level=info msg="CreateContainer within sandbox \"2c77a397f9ba47fc571c5d23e4ebbe9cab452773234e343e2cd2566e90428cac\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1ab8e68e31e0e5916b44dc85219666a7e4209f0aa63695c7677fe7e06b7df5f3\"" Jan 30 12:52:02.439605 containerd[1542]: time="2025-01-30T12:52:02.439540342Z" level=info msg="StartContainer for \"1ab8e68e31e0e5916b44dc85219666a7e4209f0aa63695c7677fe7e06b7df5f3\"" Jan 30 12:52:02.494909 containerd[1542]: time="2025-01-30T12:52:02.494853480Z" level=info msg="StartContainer for \"1ab8e68e31e0e5916b44dc85219666a7e4209f0aa63695c7677fe7e06b7df5f3\" returns successfully" Jan 30 12:52:02.512877 containerd[1542]: time="2025-01-30T12:52:02.512814053Z" level=info msg="shim disconnected" id=1ab8e68e31e0e5916b44dc85219666a7e4209f0aa63695c7677fe7e06b7df5f3 namespace=k8s.io Jan 30 12:52:02.512877 containerd[1542]: time="2025-01-30T12:52:02.512870214Z" level=warning msg="cleaning up after shim disconnected" id=1ab8e68e31e0e5916b44dc85219666a7e4209f0aa63695c7677fe7e06b7df5f3 namespace=k8s.io Jan 30 12:52:02.512877 containerd[1542]: time="2025-01-30T12:52:02.512879054Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:52:03.073991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ab8e68e31e0e5916b44dc85219666a7e4209f0aa63695c7677fe7e06b7df5f3-rootfs.mount: Deactivated successfully. Jan 30 12:52:03.407071 kubelet[2713]: E0130 12:52:03.405941 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:52:03.413994 containerd[1542]: time="2025-01-30T12:52:03.413942570Z" level=info msg="CreateContainer within sandbox \"2c77a397f9ba47fc571c5d23e4ebbe9cab452773234e343e2cd2566e90428cac\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 12:52:03.429991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3907271836.mount: Deactivated successfully. 
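The recurring dns.go "Nameserver limits exceeded" errors come from the kubelet capping a pod's effective resolv.conf at three nameservers (the classic glibc resolver limit) and dropping the rest, which is why the applied line shows exactly 1.1.1.1, 1.0.0.1, and 8.8.8.8. A minimal sketch of that clamping; the fourth nameserver 10.0.0.2 is a hypothetical stand-in for whichever entry was omitted.

    package main

    import (
        "fmt"
        "strings"
    )

    // maxNameservers matches the limit implied by the log: only the first
    // three resolvers survive into the applied nameserver line.
    const maxNameservers = 3

    // clampNameservers truncates the list and reports whether anything was
    // dropped, so the caller can emit the warning seen above.
    func clampNameservers(ns []string) ([]string, bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true
    }

    func main() {
        // 10.0.0.2 is an assumed extra entry for illustration.
        applied, truncated := clampNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "10.0.0.2"})
        if truncated {
            fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
                strings.Join(applied, " "))
        }
    }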
Jan 30 12:52:03.431599 containerd[1542]: time="2025-01-30T12:52:03.431302771Z" level=info msg="CreateContainer within sandbox \"2c77a397f9ba47fc571c5d23e4ebbe9cab452773234e343e2cd2566e90428cac\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d14bd9a43cec0f814e0c306ba09ff23422d591b044b9c01558b38a6534ee3746\"" Jan 30 12:52:03.432671 containerd[1542]: time="2025-01-30T12:52:03.432638826Z" level=info msg="StartContainer for \"d14bd9a43cec0f814e0c306ba09ff23422d591b044b9c01558b38a6534ee3746\"" Jan 30 12:52:03.493763 containerd[1542]: time="2025-01-30T12:52:03.493679854Z" level=info msg="StartContainer for \"d14bd9a43cec0f814e0c306ba09ff23422d591b044b9c01558b38a6534ee3746\" returns successfully" Jan 30 12:52:03.769653 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 30 12:52:04.411100 kubelet[2713]: E0130 12:52:04.411007 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:52:05.413737 kubelet[2713]: E0130 12:52:05.413622 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:52:06.726503 systemd-networkd[1228]: lxc_health: Link UP Jan 30 12:52:06.736155 systemd-networkd[1228]: lxc_health: Gained carrier Jan 30 12:52:07.165752 kubelet[2713]: E0130 12:52:07.165708 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:52:07.216597 kubelet[2713]: I0130 12:52:07.216290 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wsbtj" podStartSLOduration=9.21627447 podStartE2EDuration="9.21627447s" podCreationTimestamp="2025-01-30 12:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:52:04.427144995 +0000 UTC m=+85.411299736" watchObservedRunningTime="2025-01-30 12:52:07.21627447 +0000 UTC m=+88.200429251" Jan 30 12:52:07.416631 kubelet[2713]: E0130 12:52:07.416365 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:52:07.562162 systemd[1]: run-containerd-runc-k8s.io-d14bd9a43cec0f814e0c306ba09ff23422d591b044b9c01558b38a6534ee3746-runc.XeqUJd.mount: Deactivated successfully. Jan 30 12:52:08.166710 kubelet[2713]: E0130 12:52:08.166662 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:52:08.169699 systemd-networkd[1228]: lxc_health: Gained IPv6LL Jan 30 12:52:08.417942 kubelet[2713]: E0130 12:52:08.417809 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:52:11.871022 sshd[4556]: pam_unix(sshd:session): session closed for user core Jan 30 12:52:11.874677 systemd[1]: sshd@25-10.0.0.34:22-10.0.0.1:37772.service: Deactivated successfully. Jan 30 12:52:11.876933 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 12:52:11.877985 systemd-logind[1519]: Session 26 logged out. Waiting for processes to exit. 
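The pod_startup_latency_tracker entry above reports podStartSLOduration=9.21627447s for cilium-wsbtj; since firstStartedPulling/lastFinishedPulling are zero-valued (no image pull was needed), the SLO duration reduces to observedRunningTime minus podCreationTimestamp. A sketch of that arithmetic with the timestamps copied from the log; the tracker's real bookkeeping (subtracting image-pull time) is simplified away.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps taken from the pod_startup_latency_tracker entry above.
        created := time.Date(2025, time.January, 30, 12, 51, 58, 0, time.UTC)
        observedRunning := time.Date(2025, time.January, 30, 12, 52, 7, 216274470, time.UTC)
        // Prints 9.21627447s, matching the logged podStartSLOduration.
        fmt.Printf("Observed pod startup duration pod=%q podStartSLOduration=%s\n",
            "kube-system/cilium-wsbtj", observedRunning.Sub(created))
    }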
Jan 30 12:52:11.879044 systemd-logind[1519]: Removed session 26.