Jan 29 11:04:20.911294 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 11:04:20.911318 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:37:00 -00 2025
Jan 29 11:04:20.911328 kernel: KASLR enabled
Jan 29 11:04:20.911333 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:04:20.911339 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Jan 29 11:04:20.911344 kernel: random: crng init done
Jan 29 11:04:20.911351 kernel: secureboot: Secure boot disabled
Jan 29 11:04:20.911357 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:04:20.911363 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 29 11:04:20.911371 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:04:20.911377 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:04:20.911383 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:04:20.911389 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:04:20.911395 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:04:20.911402 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:04:20.911410 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:04:20.911416 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:04:20.911427 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:04:20.911434 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:04:20.911440 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 11:04:20.911446 kernel: NUMA: Failed to initialise from firmware
Jan 29 11:04:20.911452 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:04:20.911459 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff]
Jan 29 11:04:20.911465 kernel: Zone ranges:
Jan 29 11:04:20.911472 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:04:20.911479 kernel: DMA32 empty
Jan 29 11:04:20.911485 kernel: Normal empty
Jan 29 11:04:20.911491 kernel: Movable zone start for each node
Jan 29 11:04:20.911498 kernel: Early memory node ranges
Jan 29 11:04:20.911504 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 29 11:04:20.911510 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 11:04:20.911517 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 11:04:20.911523 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 11:04:20.911529 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 11:04:20.911535 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 11:04:20.911541 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 11:04:20.911548 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:04:20.911556 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 11:04:20.911562 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:04:20.911569 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 11:04:20.911579 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:04:20.911585 kernel: psci: Trusted OS migration not required
Jan 29 11:04:20.911592 kernel: psci: SMC Calling Convention v1.1
Jan 29 11:04:20.911601 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 11:04:20.911608 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:04:20.911614 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:04:20.911621 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 29 11:04:20.911628 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:04:20.911635 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:04:20.911642 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 11:04:20.911648 kernel: CPU features: detected: Spectre-v4
Jan 29 11:04:20.911663 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:04:20.911670 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 11:04:20.911689 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 11:04:20.911697 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 11:04:20.911704 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 11:04:20.911711 kernel: alternatives: applying boot alternatives
Jan 29 11:04:20.911719 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:04:20.911726 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:04:20.911733 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:04:20.911740 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:04:20.911747 kernel: Fallback order for Node 0: 0
Jan 29 11:04:20.911754 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 29 11:04:20.911760 kernel: Policy zone: DMA
Jan 29 11:04:20.911769 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:04:20.911776 kernel: software IO TLB: area num 4.
Jan 29 11:04:20.911783 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 11:04:20.911790 kernel: Memory: 2386328K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 185960K reserved, 0K cma-reserved)
Jan 29 11:04:20.911797 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:04:20.911804 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:04:20.911811 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:04:20.911819 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:04:20.911826 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:04:20.911832 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:04:20.911839 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:04:20.911846 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:04:20.911855 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:04:20.911861 kernel: GICv3: 256 SPIs implemented
Jan 29 11:04:20.911868 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:04:20.911875 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:04:20.911882 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 11:04:20.911889 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 11:04:20.911896 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 11:04:20.911902 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 11:04:20.911909 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 11:04:20.911916 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 11:04:20.911923 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 11:04:20.911932 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:04:20.911939 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:04:20.911945 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 11:04:20.911952 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 11:04:20.911959 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 11:04:20.911966 kernel: arm-pv: using stolen time PV
Jan 29 11:04:20.911973 kernel: Console: colour dummy device 80x25
Jan 29 11:04:20.911980 kernel: ACPI: Core revision 20230628
Jan 29 11:04:20.911987 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 11:04:20.911994 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:04:20.912003 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:04:20.912010 kernel: landlock: Up and running.
Jan 29 11:04:20.912017 kernel: SELinux: Initializing.
Jan 29 11:04:20.912024 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:04:20.912031 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:04:20.912038 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:04:20.912045 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:04:20.912052 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:04:20.912059 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:04:20.912068 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 11:04:20.912075 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 11:04:20.912082 kernel: Remapping and enabling EFI services.
Jan 29 11:04:20.912089 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:04:20.912096 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:04:20.912103 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 11:04:20.912110 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 11:04:20.912117 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:04:20.912124 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 11:04:20.912131 kernel: Detected PIPT I-cache on CPU2
Jan 29 11:04:20.912139 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 11:04:20.912146 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 11:04:20.912158 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:04:20.912167 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 11:04:20.912175 kernel: Detected PIPT I-cache on CPU3
Jan 29 11:04:20.912182 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 11:04:20.912189 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 11:04:20.912197 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:04:20.912204 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 11:04:20.912213 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:04:20.912220 kernel: SMP: Total of 4 processors activated.
Jan 29 11:04:20.912227 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:04:20.912235 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 11:04:20.912242 kernel: CPU features: detected: Common not Private translations
Jan 29 11:04:20.912250 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:04:20.912257 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 11:04:20.912264 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 11:04:20.912274 kernel: CPU features: detected: LSE atomic instructions
Jan 29 11:04:20.912281 kernel: CPU features: detected: Privileged Access Never
Jan 29 11:04:20.912288 kernel: CPU features: detected: RAS Extension Support
Jan 29 11:04:20.912296 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 11:04:20.912303 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:04:20.912310 kernel: alternatives: applying system-wide alternatives
Jan 29 11:04:20.912317 kernel: devtmpfs: initialized
Jan 29 11:04:20.912325 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:04:20.912332 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:04:20.912341 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:04:20.912348 kernel: SMBIOS 3.0.0 present.
Jan 29 11:04:20.912356 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 29 11:04:20.912363 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:04:20.912371 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:04:20.912382 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:04:20.912391 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:04:20.912398 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:04:20.912406 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jan 29 11:04:20.912414 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:04:20.912422 kernel: cpuidle: using governor menu
Jan 29 11:04:20.912430 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:04:20.912437 kernel: ASID allocator initialised with 32768 entries
Jan 29 11:04:20.912446 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:04:20.912454 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:04:20.912461 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 11:04:20.912468 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 11:04:20.912476 kernel: Modules: 508960 pages in range for PLT usage
Jan 29 11:04:20.912485 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:04:20.912492 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:04:20.912500 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:04:20.912507 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:04:20.912514 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:04:20.912521 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:04:20.912529 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:04:20.912536 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:04:20.912543 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:04:20.912552 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:04:20.912559 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:04:20.912567 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:04:20.912574 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:04:20.912581 kernel: ACPI: Interpreter enabled
Jan 29 11:04:20.912589 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:04:20.912596 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 11:04:20.912604 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 11:04:20.912611 kernel: printk: console [ttyAMA0] enabled
Jan 29 11:04:20.912620 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:04:20.912807 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:04:20.912901 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:04:20.912971 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:04:20.913038 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 11:04:20.913103 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 11:04:20.913112 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 11:04:20.913123 kernel: PCI host bridge to bus 0000:00
Jan 29 11:04:20.913196 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 11:04:20.913259 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 11:04:20.913319 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 11:04:20.913382 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:04:20.913465 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 11:04:20.913550 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:04:20.913622 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 29 11:04:20.913713 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 11:04:20.913782 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:04:20.913857 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:04:20.913922 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 11:04:20.913988 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 29 11:04:20.914049 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 11:04:20.914110 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 11:04:20.914168 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 11:04:20.914178 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 11:04:20.914185 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 11:04:20.914192 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 11:04:20.914199 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 11:04:20.914206 kernel: iommu: Default domain type: Translated
Jan 29 11:04:20.914214 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:04:20.914223 kernel: efivars: Registered efivars operations
Jan 29 11:04:20.914231 kernel: vgaarb: loaded
Jan 29 11:04:20.914238 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:04:20.914245 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:04:20.914252 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:04:20.914259 kernel: pnp: PnP ACPI init
Jan 29 11:04:20.914333 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 11:04:20.914343 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 11:04:20.914352 kernel: NET: Registered PF_INET protocol family
Jan 29 11:04:20.914360 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:04:20.914367 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:04:20.914375 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:04:20.914382 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:04:20.914389 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:04:20.914396 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:04:20.914403 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:04:20.914411 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:04:20.914420 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:04:20.914427 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:04:20.914434 kernel: kvm [1]: HYP mode not available
Jan 29 11:04:20.914464 kernel: Initialise system trusted keyrings
Jan 29 11:04:20.914478 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:04:20.914485 kernel: Key type asymmetric registered
Jan 29 11:04:20.914492 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:04:20.914500 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:04:20.914507 kernel: io scheduler mq-deadline registered
Jan 29 11:04:20.914516 kernel: io scheduler kyber registered
Jan 29 11:04:20.914524 kernel: io scheduler bfq registered
Jan 29 11:04:20.914531 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 11:04:20.914538 kernel: ACPI: button: Power Button [PWRB]
Jan 29 11:04:20.914546 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 11:04:20.914611 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 11:04:20.914621 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:04:20.914628 kernel: thunder_xcv, ver 1.0
Jan 29 11:04:20.914635 kernel: thunder_bgx, ver 1.0
Jan 29 11:04:20.914645 kernel: nicpf, ver 1.0
Jan 29 11:04:20.914661 kernel: nicvf, ver 1.0
Jan 29 11:04:20.914750 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 11:04:20.914831 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:04:20 UTC (1738148660)
Jan 29 11:04:20.914841 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 11:04:20.914849 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 11:04:20.914856 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 11:04:20.914863 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 11:04:20.914873 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:04:20.914881 kernel: Segment Routing with IPv6
Jan 29 11:04:20.914888 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:04:20.914895 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:04:20.914902 kernel: Key type dns_resolver registered
Jan 29 11:04:20.914909 kernel: registered taskstats version 1
Jan 29 11:04:20.914917 kernel: Loading compiled-in X.509 certificates
Jan 29 11:04:20.914924 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f3333311a24aa8c58222f4e98a07eaa1f186ad1a'
Jan 29 11:04:20.914932 kernel: Key type .fscrypt registered
Jan 29 11:04:20.914941 kernel: Key type fscrypt-provisioning registered
Jan 29 11:04:20.914948 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:04:20.914956 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:04:20.914963 kernel: ima: No architecture policies found
Jan 29 11:04:20.914970 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 11:04:20.914978 kernel: clk: Disabling unused clocks
Jan 29 11:04:20.914985 kernel: Freeing unused kernel memory: 39680K
Jan 29 11:04:20.914992 kernel: Run /init as init process
Jan 29 11:04:20.914999 kernel: with arguments:
Jan 29 11:04:20.915009 kernel: /init
Jan 29 11:04:20.915016 kernel: with environment:
Jan 29 11:04:20.915023 kernel: HOME=/
Jan 29 11:04:20.915030 kernel: TERM=linux
Jan 29 11:04:20.915037 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:04:20.915046 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:04:20.915055 systemd[1]: Detected virtualization kvm.
Jan 29 11:04:20.915065 systemd[1]: Detected architecture arm64.
Jan 29 11:04:20.915072 systemd[1]: Running in initrd.
Jan 29 11:04:20.915080 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:04:20.915087 systemd[1]: Hostname set to <localhost>.
Jan 29 11:04:20.915095 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:04:20.915103 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:04:20.915110 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:04:20.915120 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:04:20.915129 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:04:20.915138 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:04:20.915146 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:04:20.915154 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:04:20.915163 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:04:20.915171 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:04:20.915179 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:04:20.915188 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:04:20.915196 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:04:20.915204 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:04:20.915211 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:04:20.915219 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:04:20.915227 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:04:20.915234 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:04:20.915242 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:04:20.915250 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:04:20.915259 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:04:20.915267 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:04:20.915275 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:04:20.915283 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:04:20.915291 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:04:20.915298 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:04:20.915306 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:04:20.915314 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:04:20.915321 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:04:20.915331 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:04:20.915339 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:04:20.915346 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:04:20.915354 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:04:20.915362 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:04:20.915391 systemd-journald[239]: Collecting audit messages is disabled.
Jan 29 11:04:20.915411 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:04:20.915419 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:04:20.915429 kernel: Bridge firewalling registered
Jan 29 11:04:20.915436 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:04:20.915445 systemd-journald[239]: Journal started
Jan 29 11:04:20.915463 systemd-journald[239]: Runtime Journal (/run/log/journal/67885c338a344764807f5a96519e0261) is 5.9M, max 47.3M, 41.4M free.
Jan 29 11:04:20.899775 systemd-modules-load[240]: Inserted module 'overlay'
Jan 29 11:04:20.917505 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:04:20.913277 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 29 11:04:20.919095 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:04:20.920236 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:04:20.924753 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:04:20.926283 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:04:20.928879 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:04:20.931217 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:04:20.938296 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:04:20.942152 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:04:20.946368 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:04:20.947518 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:04:20.956897 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:04:20.958866 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:04:20.969013 dracut-cmdline[278]: dracut-dracut-053
Jan 29 11:04:20.971641 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:04:20.989925 systemd-resolved[279]: Positive Trust Anchors:
Jan 29 11:04:20.990000 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:04:20.990030 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:04:20.994700 systemd-resolved[279]: Defaulting to hostname 'linux'.
Jan 29 11:04:20.997781 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:04:20.999300 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:04:21.046709 kernel: SCSI subsystem initialized
Jan 29 11:04:21.050695 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:04:21.058709 kernel: iscsi: registered transport (tcp)
Jan 29 11:04:21.071711 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:04:21.071727 kernel: QLogic iSCSI HBA Driver
Jan 29 11:04:21.114872 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:04:21.122893 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:04:21.140085 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:04:21.140146 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:04:21.140172 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:04:21.191708 kernel: raid6: neonx8 gen() 15766 MB/s
Jan 29 11:04:21.208696 kernel: raid6: neonx4 gen() 15651 MB/s
Jan 29 11:04:21.225695 kernel: raid6: neonx2 gen() 13331 MB/s
Jan 29 11:04:21.242694 kernel: raid6: neonx1 gen() 10486 MB/s
Jan 29 11:04:21.259693 kernel: raid6: int64x8 gen() 6960 MB/s
Jan 29 11:04:21.276693 kernel: raid6: int64x4 gen() 7346 MB/s
Jan 29 11:04:21.293696 kernel: raid6: int64x2 gen() 6131 MB/s
Jan 29 11:04:21.310701 kernel: raid6: int64x1 gen() 5049 MB/s
Jan 29 11:04:21.310723 kernel: raid6: using algorithm neonx8 gen() 15766 MB/s
Jan 29 11:04:21.327705 kernel: raid6: .... xor() 11929 MB/s, rmw enabled
Jan 29 11:04:21.327719 kernel: raid6: using neon recovery algorithm
Jan 29 11:04:21.332698 kernel: xor: measuring software checksum speed
Jan 29 11:04:21.332718 kernel: 8regs : 19769 MB/sec
Jan 29 11:04:21.333694 kernel: 32regs : 18235 MB/sec
Jan 29 11:04:21.333707 kernel: arm64_neon : 26945 MB/sec
Jan 29 11:04:21.333716 kernel: xor: using function: arm64_neon (26945 MB/sec)
Jan 29 11:04:21.385711 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:04:21.398301 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:04:21.414870 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:04:21.426714 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jan 29 11:04:21.429974 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:04:21.441879 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:04:21.453914 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Jan 29 11:04:21.484712 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:04:21.497846 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:04:21.538243 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:04:21.546908 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:04:21.563525 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:04:21.565310 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:04:21.567761 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:04:21.568700 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:04:21.576913 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:04:21.592728 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 11:04:21.603882 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:04:21.603998 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:04:21.604010 kernel: GPT:9289727 != 19775487
Jan 29 11:04:21.604020 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:04:21.604029 kernel: GPT:9289727 != 19775487
Jan 29 11:04:21.604038 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:04:21.604054 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:04:21.592899 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:04:21.603069 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:04:21.603184 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:04:21.605874 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:04:21.607225 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:04:21.607437 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:04:21.609131 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:04:21.618109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:04:21.631644 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:04:21.635704 kernel: BTRFS: device fsid b5bc7ecc-f31a-46c7-9582-5efca7819025 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (508)
Jan 29 11:04:21.635748 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (524)
Jan 29 11:04:21.636422 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:04:21.637704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:04:21.648353 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:04:21.649317 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:04:21.654374 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:04:21.669853 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:04:21.672029 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:04:21.676979 disk-uuid[551]: Primary Header is updated.
Jan 29 11:04:21.676979 disk-uuid[551]: Secondary Entries is updated.
Jan 29 11:04:21.676979 disk-uuid[551]: Secondary Header is updated.
Jan 29 11:04:21.679703 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:04:21.699807 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:04:22.697739 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:04:22.698236 disk-uuid[552]: The operation has completed successfully.
Jan 29 11:04:22.730641 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:04:22.730789 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:04:22.743899 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:04:22.746731 sh[571]: Success
Jan 29 11:04:22.763186 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 11:04:22.807655 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:04:22.809302 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:04:22.810091 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:04:22.822313 kernel: BTRFS info (device dm-0): first mount of filesystem b5bc7ecc-f31a-46c7-9582-5efca7819025
Jan 29 11:04:22.822357 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:04:22.822377 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:04:22.822396 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:04:22.822902 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:04:22.827028 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:04:22.828250 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:04:22.838891 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:04:22.840324 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:04:22.849949 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:04:22.850007 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:04:22.850019 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:04:22.853702 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:04:22.861408 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:04:22.863695 kernel: BTRFS info (device vda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:04:22.870055 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:04:22.877895 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:04:22.940633 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:04:22.950921 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:04:22.969207 systemd-networkd[763]: lo: Link UP
Jan 29 11:04:22.969221 systemd-networkd[763]: lo: Gained carrier
Jan 29 11:04:22.970069 systemd-networkd[763]: Enumeration completed
Jan 29 11:04:22.970499 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:04:22.970500 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:04:22.970502 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:04:22.972418 systemd[1]: Reached target network.target - Network.
Jan 29 11:04:22.976028 systemd-networkd[763]: eth0: Link UP
Jan 29 11:04:22.976032 systemd-networkd[763]: eth0: Gained carrier
Jan 29 11:04:22.976040 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:04:22.979541 ignition[664]: Ignition 2.20.0
Jan 29 11:04:22.979589 ignition[664]: Stage: fetch-offline
Jan 29 11:04:22.979631 ignition[664]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:04:22.979641 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:04:22.979868 ignition[664]: parsed url from cmdline: ""
Jan 29 11:04:22.979872 ignition[664]: no config URL provided
Jan 29 11:04:22.979877 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:04:22.979885 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:04:22.979916 ignition[664]: op(1): [started] loading QEMU firmware config module
Jan 29 11:04:22.979920 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:04:22.987180 ignition[664]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:04:23.003732 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:04:23.026228 ignition[664]: parsing config with SHA512: bf9e0a695418b58a167aee7dece4fe0318243c4b0834fefd6e4dd177a034be5db98b4be0777115d84662a1a7ef6dc790a95461766e6d57c153fb7e89c59dfde8
Jan 29 11:04:23.033600 unknown[664]: fetched base config from "system"
Jan 29 11:04:23.033613 unknown[664]: fetched user config from "qemu"
Jan 29 11:04:23.034130 ignition[664]: fetch-offline: fetch-offline passed
Jan 29 11:04:23.034219 ignition[664]: Ignition finished successfully
Jan 29 11:04:23.035719 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:04:23.037114 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:04:23.037793 systemd-resolved[279]: Detected conflict on linux IN A 10.0.0.81
Jan 29 11:04:23.037801 systemd-resolved[279]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Jan 29 11:04:23.041884 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:04:23.052953 ignition[769]: Ignition 2.20.0
Jan 29 11:04:23.052966 ignition[769]: Stage: kargs
Jan 29 11:04:23.053144 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:04:23.053153 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:04:23.054086 ignition[769]: kargs: kargs passed
Jan 29 11:04:23.054137 ignition[769]: Ignition finished successfully
Jan 29 11:04:23.057714 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:04:23.067893 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:04:23.077743 ignition[778]: Ignition 2.20.0
Jan 29 11:04:23.077756 ignition[778]: Stage: disks
Jan 29 11:04:23.077928 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:04:23.077938 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:04:23.078873 ignition[778]: disks: disks passed
Jan 29 11:04:23.078930 ignition[778]: Ignition finished successfully
Jan 29 11:04:23.081067 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:04:23.082053 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:04:23.083205 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:04:23.084702 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:04:23.086095 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:04:23.087331 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:04:23.089398 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:04:23.103119 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:04:23.107856 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:04:23.115855 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:04:23.156704 kernel: EXT4-fs (vda9): mounted filesystem bd47c032-97f4-4b3a-b174-3601de374086 r/w with ordered data mode. Quota mode: none.
Jan 29 11:04:23.157361 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:04:23.158461 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:04:23.168780 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:04:23.171393 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:04:23.172236 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:04:23.172277 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:04:23.172299 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:04:23.177895 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:04:23.179883 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:04:23.182100 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797)
Jan 29 11:04:23.183705 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:04:23.183732 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:04:23.183743 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:04:23.186715 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:04:23.188115 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:04:23.228700 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:04:23.232931 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:04:23.236794 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:04:23.240745 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:04:23.316844 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:04:23.326830 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:04:23.328254 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:04:23.332704 kernel: BTRFS info (device vda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:04:23.350194 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:04:23.351519 ignition[909]: INFO : Ignition 2.20.0
Jan 29 11:04:23.351519 ignition[909]: INFO : Stage: mount
Jan 29 11:04:23.351519 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:04:23.351519 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:04:23.354083 ignition[909]: INFO : mount: mount passed
Jan 29 11:04:23.354083 ignition[909]: INFO : Ignition finished successfully
Jan 29 11:04:23.353907 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:04:23.360818 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:04:23.820791 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:04:23.836885 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:04:23.843072 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925)
Jan 29 11:04:23.843114 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:04:23.843125 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:04:23.843762 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:04:23.846718 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:04:23.847228 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:04:23.871604 ignition[943]: INFO : Ignition 2.20.0
Jan 29 11:04:23.871604 ignition[943]: INFO : Stage: files
Jan 29 11:04:23.872885 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:04:23.872885 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:04:23.872885 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:04:23.875325 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:04:23.875325 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:04:23.877948 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:04:23.879017 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:04:23.880197 unknown[943]: wrote ssh authorized keys file for user: core
Jan 29 11:04:23.881089 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:04:23.882759 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 11:04:23.884187 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 11:04:24.025777 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 11:04:24.157801 systemd-networkd[763]: eth0: Gained IPv6LL
Jan 29 11:04:24.381169 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 11:04:24.381169 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 11:04:24.384216 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 29 11:04:24.598927 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 11:04:24.645652 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 11:04:24.645652 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:04:24.645652 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:04:24.645652 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:04:24.645652 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:04:24.645652 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:04:24.645652 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:04:24.645652 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:04:24.645652 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:04:24.645652 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:04:24.676382 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:04:24.676382 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 11:04:24.676382 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 11:04:24.676382 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 11:04:24.676382 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 29 11:04:24.873642 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 11:04:25.102561 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 11:04:25.102561 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 29 11:04:25.105530 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:04:25.105530 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:04:25.105530 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 29 11:04:25.105530 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 29 11:04:25.105530 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:04:25.105530 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:04:25.105530 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 29 11:04:25.105530 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:04:25.130142 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:04:25.133975 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:04:25.135062 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:04:25.135062 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:04:25.135062 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:04:25.135062 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:04:25.135062 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:04:25.135062 ignition[943]: INFO : files: files passed
Jan 29 11:04:25.135062 ignition[943]: INFO : Ignition finished successfully
Jan 29 11:04:25.138042 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:04:25.145917 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:04:25.147518 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:04:25.150064 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:04:25.150153 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:04:25.155425 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:04:25.158877 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:04:25.158877 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:04:25.161100 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:04:25.162175 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:04:25.163519 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:04:25.177031 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:04:25.199129 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:04:25.199232 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:04:25.200711 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:04:25.201508 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:04:25.203047 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:04:25.203935 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:04:25.220748 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:04:25.232916 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:04:25.240983 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:04:25.241962 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:04:25.243441 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:04:25.244713 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:04:25.244849 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:04:25.246724 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:04:25.248337 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:04:25.249538 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:04:25.250787 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:04:25.252212 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:04:25.253597 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:04:25.254989 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:04:25.256422 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:04:25.257864 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:04:25.259252 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:04:25.260458 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:04:25.260593 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:04:25.262359 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:04:25.263808 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:04:25.265247 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:04:25.266724 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:04:25.267664 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:04:25.267814 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:04:25.269986 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:04:25.270102 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:04:25.271623 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:04:25.272791 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:04:25.276751 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:04:25.277783 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:04:25.279406 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:04:25.280564 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:04:25.280674 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:04:25.281862 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:04:25.281947 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:04:25.283079 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:04:25.283193 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:04:25.284533 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:04:25.284636 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:04:25.294889 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:04:25.295608 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:04:25.295773 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:04:25.298099 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:04:25.299358 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:04:25.299485 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:04:25.301186 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:04:25.301372 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 29 11:04:25.307258 ignition[998]: INFO : Ignition 2.20.0 Jan 29 11:04:25.307258 ignition[998]: INFO : Stage: umount Jan 29 11:04:25.309360 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:04:25.309360 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:04:25.309360 ignition[998]: INFO : umount: umount passed Jan 29 11:04:25.309360 ignition[998]: INFO : Ignition finished successfully Jan 29 11:04:25.308749 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:04:25.308850 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:04:25.310771 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:04:25.311254 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:04:25.311348 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:04:25.312925 systemd[1]: Stopped target network.target - Network. Jan 29 11:04:25.313750 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:04:25.313827 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:04:25.315600 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:04:25.315644 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:04:25.316960 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:04:25.317001 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:04:25.318582 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:04:25.318625 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:04:25.320413 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:04:25.321660 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:04:25.327738 systemd-networkd[763]: eth0: DHCPv6 lease lost Jan 29 11:04:25.329882 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:04:25.331767 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:04:25.333190 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:04:25.333279 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:04:25.335768 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:04:25.335811 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:04:25.351868 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:04:25.352537 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:04:25.352597 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:04:25.354108 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:04:25.354148 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:04:25.355494 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:04:25.355536 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:04:25.357143 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:04:25.357182 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:04:25.358762 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:04:25.368157 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 29 11:04:25.368266 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:04:25.385185 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:04:25.385325 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:04:25.387235 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:04:25.387316 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:04:25.390745 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:04:25.390812 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:04:25.392331 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:04:25.392381 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:04:25.393770 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:04:25.393824 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:04:25.397385 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:04:25.397485 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:04:25.401074 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:04:25.401132 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:04:25.403403 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:04:25.403533 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:04:25.414904 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:04:25.415793 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:04:25.415852 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:04:25.417548 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:04:25.417584 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:04:25.420282 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:04:25.420373 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:04:25.421934 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:04:25.424019 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:04:25.434309 systemd[1]: Switching root. Jan 29 11:04:25.464676 systemd-journald[239]: Journal stopped Jan 29 11:04:26.179516 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Jan 29 11:04:26.179580 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:04:26.179593 kernel: SELinux: policy capability open_perms=1 Jan 29 11:04:26.179603 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:04:26.179612 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:04:26.179621 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:04:26.179631 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:04:26.179640 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:04:26.179649 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:04:26.179707 kernel: audit: type=1403 audit(1738148665.622:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:04:26.179722 systemd[1]: Successfully loaded SELinux policy in 30.210ms. 
Jan 29 11:04:26.179743 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.694ms. Jan 29 11:04:26.179754 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:04:26.179765 systemd[1]: Detected virtualization kvm. Jan 29 11:04:26.179776 systemd[1]: Detected architecture arm64. Jan 29 11:04:26.179786 systemd[1]: Detected first boot. Jan 29 11:04:26.179797 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:04:26.179807 zram_generator::config[1044]: No configuration found. Jan 29 11:04:26.179820 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:04:26.179831 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:04:26.179841 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:04:26.179852 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:04:26.179862 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:04:26.179873 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:04:26.179883 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:04:26.179893 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:04:26.179906 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:04:26.179917 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:04:26.179928 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:04:26.179942 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:04:26.179953 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:04:26.179979 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:04:26.179991 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:04:26.180001 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:04:26.180012 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:04:26.180024 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:04:26.180034 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 11:04:26.180045 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:04:26.180056 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:04:26.180067 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:04:26.180077 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:04:26.180088 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:04:26.180100 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:04:26.180111 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jan 29 11:04:26.180122 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:04:26.180132 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:04:26.180143 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:04:26.180154 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:04:26.180165 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:04:26.180175 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:04:26.180186 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:04:26.180196 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:04:26.180208 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:04:26.180218 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:04:26.180228 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:04:26.180238 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:04:26.180248 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:04:26.180259 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:04:26.180269 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:04:26.180280 systemd[1]: Reached target machines.target - Containers. Jan 29 11:04:26.180291 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:04:26.180301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:04:26.180312 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:04:26.180322 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:04:26.180332 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:04:26.180343 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:04:26.180353 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:04:26.180364 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:04:26.180374 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:04:26.180387 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:04:26.180398 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:04:26.180409 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:04:26.180421 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:04:26.180431 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:04:26.180442 kernel: fuse: init (API version 7.39) Jan 29 11:04:26.180451 kernel: loop: module loaded Jan 29 11:04:26.180461 kernel: ACPI: bus type drm_connector registered Jan 29 11:04:26.180471 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:04:26.180483 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 29 11:04:26.180494 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:04:26.180505 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:04:26.180516 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:04:26.180546 systemd-journald[1115]: Collecting audit messages is disabled. Jan 29 11:04:26.180568 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:04:26.180578 systemd[1]: Stopped verity-setup.service. Jan 29 11:04:26.180591 systemd-journald[1115]: Journal started Jan 29 11:04:26.180613 systemd-journald[1115]: Runtime Journal (/run/log/journal/67885c338a344764807f5a96519e0261) is 5.9M, max 47.3M, 41.4M free. Jan 29 11:04:25.990828 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:04:26.012712 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:04:26.013077 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:04:26.183726 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:04:26.184232 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:04:26.185239 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:04:26.186261 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:04:26.187246 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:04:26.188182 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:04:26.189147 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:04:26.190749 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:04:26.192004 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:04:26.193394 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:04:26.193630 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:04:26.194950 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:04:26.195204 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:04:26.196526 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:04:26.196693 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:04:26.197839 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:04:26.197981 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:04:26.199126 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:04:26.199269 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:04:26.200569 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:04:26.200741 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:04:26.201841 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:04:26.202920 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:04:26.204288 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:04:26.217405 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:04:26.227810 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jan 29 11:04:26.232889 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:04:26.233814 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:04:26.233850 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:04:26.235730 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:04:26.237830 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:04:26.239947 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:04:26.240945 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:04:26.242612 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:04:26.244934 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:04:26.245946 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:04:26.249889 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:04:26.250895 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:04:26.253293 systemd-journald[1115]: Time spent on flushing to /var/log/journal/67885c338a344764807f5a96519e0261 is 24.486ms for 858 entries. Jan 29 11:04:26.253293 systemd-journald[1115]: System Journal (/var/log/journal/67885c338a344764807f5a96519e0261) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:04:26.299355 systemd-journald[1115]: Received client request to flush runtime journal. Jan 29 11:04:26.299466 kernel: loop0: detected capacity change from 0 to 116808 Jan 29 11:04:26.299491 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:04:26.253986 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:04:26.258464 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:04:26.263906 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:04:26.266247 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:04:26.269040 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:04:26.270061 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:04:26.271782 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:04:26.273747 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:04:26.281078 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:04:26.294983 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:04:26.299727 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:04:26.305760 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:04:26.307182 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:04:26.313510 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 29 11:04:26.315751 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:04:26.320227 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:04:26.327105 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:04:26.328527 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 11:04:26.330707 kernel: loop1: detected capacity change from 0 to 189592 Jan 29 11:04:26.345911 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jan 29 11:04:26.345931 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Jan 29 11:04:26.350520 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:04:26.370718 kernel: loop2: detected capacity change from 0 to 113536 Jan 29 11:04:26.401712 kernel: loop3: detected capacity change from 0 to 116808 Jan 29 11:04:26.407709 kernel: loop4: detected capacity change from 0 to 189592 Jan 29 11:04:26.412713 kernel: loop5: detected capacity change from 0 to 113536 Jan 29 11:04:26.416082 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 11:04:26.416492 (sd-merge)[1180]: Merged extensions into '/usr'. Jan 29 11:04:26.420474 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:04:26.420890 systemd[1]: Reloading... Jan 29 11:04:26.478731 zram_generator::config[1208]: No configuration found. Jan 29 11:04:26.539413 ldconfig[1150]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:04:26.579454 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:04:26.615247 systemd[1]: Reloading finished in 193 ms. Jan 29 11:04:26.650776 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:04:26.651936 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:04:26.666870 systemd[1]: Starting ensure-sysext.service... Jan 29 11:04:26.668807 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:04:26.677849 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:04:26.677870 systemd[1]: Reloading... Jan 29 11:04:26.689136 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:04:26.689386 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:04:26.690063 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:04:26.690272 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jan 29 11:04:26.690315 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jan 29 11:04:26.693158 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:04:26.693171 systemd-tmpfiles[1242]: Skipping /boot Jan 29 11:04:26.700441 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 29 11:04:26.701954 systemd-tmpfiles[1242]: Skipping /boot Jan 29 11:04:26.727489 zram_generator::config[1272]: No configuration found. Jan 29 11:04:26.808621 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:04:26.844612 systemd[1]: Reloading finished in 166 ms. Jan 29 11:04:26.861958 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:04:26.870206 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:04:26.878205 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:04:26.880679 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:04:26.883999 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:04:26.890044 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:04:26.893983 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:04:26.896698 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:04:26.899995 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:04:26.903083 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:04:26.906090 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:04:26.909231 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:04:26.910281 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:04:26.914939 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:04:26.922834 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:04:26.924558 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:04:26.924729 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:04:26.926067 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:04:26.926200 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:04:26.928073 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:04:26.928214 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:04:26.936647 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:04:26.938808 systemd-udevd[1310]: Using default interface naming scheme 'v255'. Jan 29 11:04:26.939041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:04:26.942106 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:04:26.946093 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:04:26.947530 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:04:26.949411 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 29 11:04:26.953723 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:04:26.955486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:04:26.955925 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:04:26.958223 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:04:26.965872 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:04:26.972434 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:04:26.981657 systemd[1]: Finished ensure-sysext.service. Jan 29 11:04:26.989735 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:04:26.996007 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:04:26.999233 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:04:27.002891 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:04:27.005254 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:04:27.011343 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:04:27.013022 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:04:27.013570 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:04:27.015309 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:04:27.016806 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:04:27.017941 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:04:27.018080 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:04:27.023272 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:04:27.023925 augenrules[1379]: No rules Jan 29 11:04:27.026314 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:04:27.026701 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1340) Jan 29 11:04:27.027721 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:04:27.030952 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:04:27.031122 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:04:27.044601 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 29 11:04:27.051009 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:04:27.051090 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:04:27.051113 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:04:27.073156 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:04:27.087931 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 29 11:04:27.092419 systemd-resolved[1309]: Positive Trust Anchors: Jan 29 11:04:27.096303 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:04:27.096340 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:04:27.100341 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:04:27.101621 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:04:27.111543 systemd-resolved[1309]: Defaulting to hostname 'linux'. Jan 29 11:04:27.113266 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:04:27.114286 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:04:27.117361 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:04:27.121086 systemd-networkd[1372]: lo: Link UP Jan 29 11:04:27.121097 systemd-networkd[1372]: lo: Gained carrier Jan 29 11:04:27.121940 systemd-networkd[1372]: Enumeration completed Jan 29 11:04:27.122046 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:04:27.123853 systemd[1]: Reached target network.target - Network. Jan 29 11:04:27.126820 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:04:27.126825 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:04:27.127511 systemd-networkd[1372]: eth0: Link UP Jan 29 11:04:27.127517 systemd-networkd[1372]: eth0: Gained carrier Jan 29 11:04:27.127532 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:04:27.132050 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:04:27.146777 systemd-networkd[1372]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:04:27.147517 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection. Jan 29 11:04:27.148905 systemd-timesyncd[1373]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 11:04:27.148968 systemd-timesyncd[1373]: Initial clock synchronization to Wed 2025-01-29 11:04:26.767637 UTC. Jan 29 11:04:27.164981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:04:27.173140 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:04:27.176343 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:04:27.201456 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:04:27.212814 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 29 11:04:27.223613 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:04:27.225345 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:04:27.226303 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:04:27.227185 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:04:27.228098 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:04:27.229177 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:04:27.230106 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:04:27.231241 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:04:27.232152 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:04:27.232186 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:04:27.232841 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:04:27.234341 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:04:27.236613 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:04:27.253653 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:04:27.255859 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:04:27.257254 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:04:27.258207 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:04:27.258914 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:04:27.259612 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:04:27.259641 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:04:27.260765 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:04:27.262614 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:04:27.264620 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:04:27.266341 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:04:27.272964 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:04:27.274126 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:04:27.277539 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:04:27.281036 jq[1414]: false Jan 29 11:04:27.282501 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:04:27.284760 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:04:27.288865 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:04:27.292709 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 29 11:04:27.294902 extend-filesystems[1415]: Found loop3 Jan 29 11:04:27.295858 extend-filesystems[1415]: Found loop4 Jan 29 11:04:27.295858 extend-filesystems[1415]: Found loop5 Jan 29 11:04:27.295858 extend-filesystems[1415]: Found vda Jan 29 11:04:27.295858 extend-filesystems[1415]: Found vda1 Jan 29 11:04:27.295858 extend-filesystems[1415]: Found vda2 Jan 29 11:04:27.295858 extend-filesystems[1415]: Found vda3 Jan 29 11:04:27.295858 extend-filesystems[1415]: Found usr Jan 29 11:04:27.295858 extend-filesystems[1415]: Found vda4 Jan 29 11:04:27.295858 extend-filesystems[1415]: Found vda6 Jan 29 11:04:27.295858 extend-filesystems[1415]: Found vda7 Jan 29 11:04:27.295858 extend-filesystems[1415]: Found vda9 Jan 29 11:04:27.295858 extend-filesystems[1415]: Checking size of /dev/vda9 Jan 29 11:04:27.310884 extend-filesystems[1415]: Resized partition /dev/vda9 Jan 29 11:04:27.318404 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:04:27.297093 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:04:27.298070 dbus-daemon[1413]: [system] SELinux support is enabled Jan 29 11:04:27.320606 extend-filesystems[1433]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:04:27.297576 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:04:27.306025 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:04:27.311583 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:04:27.312970 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:04:27.318761 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:04:27.321382 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:04:27.321534 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:04:27.321828 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:04:27.321964 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:04:27.326830 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:04:27.346219 jq[1432]: true Jan 29 11:04:27.326985 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:04:27.340941 (ntainerd)[1442]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:04:27.347801 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:04:27.347836 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:04:27.351669 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:04:27.356355 jq[1443]: true Jan 29 11:04:27.351738 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 29 11:04:27.357996 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1361) Jan 29 11:04:27.358762 update_engine[1425]: I20250129 11:04:27.358546 1425 main.cc:92] Flatcar Update Engine starting Jan 29 11:04:27.368818 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:04:27.368949 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:04:27.378947 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:04:27.380228 extend-filesystems[1433]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:04:27.380228 extend-filesystems[1433]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:04:27.380228 extend-filesystems[1433]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:04:27.389557 update_engine[1425]: I20250129 11:04:27.369489 1425 update_check_scheduler.cc:74] Next update check in 4m3s Jan 29 11:04:27.381447 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:04:27.389667 tar[1437]: linux-arm64/helm Jan 29 11:04:27.397494 extend-filesystems[1415]: Resized filesystem in /dev/vda9 Jan 29 11:04:27.383786 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:04:27.407136 bash[1469]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:04:27.410437 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:04:27.412095 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:04:27.413625 systemd-logind[1421]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 11:04:27.417014 systemd-logind[1421]: New seat seat0. Jan 29 11:04:27.418417 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:04:27.481466 locksmithd[1453]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:04:27.575693 containerd[1442]: time="2025-01-29T11:04:27.573098040Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:04:27.600048 containerd[1442]: time="2025-01-29T11:04:27.599949320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:04:27.601467 containerd[1442]: time="2025-01-29T11:04:27.601424080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:04:27.601499 containerd[1442]: time="2025-01-29T11:04:27.601467480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:04:27.601499 containerd[1442]: time="2025-01-29T11:04:27.601489280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:04:27.601680 containerd[1442]: time="2025-01-29T11:04:27.601652480Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:04:27.601717 containerd[1442]: time="2025-01-29T11:04:27.601706280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:04:27.601801 containerd[1442]: time="2025-01-29T11:04:27.601778000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:04:27.601826 containerd[1442]: time="2025-01-29T11:04:27.601798960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:04:27.602000 containerd[1442]: time="2025-01-29T11:04:27.601977520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:04:27.602029 containerd[1442]: time="2025-01-29T11:04:27.602001400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:04:27.602029 containerd[1442]: time="2025-01-29T11:04:27.602015040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:04:27.602029 containerd[1442]: time="2025-01-29T11:04:27.602025080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:04:27.602120 containerd[1442]: time="2025-01-29T11:04:27.602102400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:04:27.602327 containerd[1442]: time="2025-01-29T11:04:27.602305040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:04:27.602433 containerd[1442]: time="2025-01-29T11:04:27.602411040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:04:27.602433 containerd[1442]: time="2025-01-29T11:04:27.602429600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:04:27.602521 containerd[1442]: time="2025-01-29T11:04:27.602504600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:04:27.602567 containerd[1442]: time="2025-01-29T11:04:27.602551840Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:04:27.605614 containerd[1442]: time="2025-01-29T11:04:27.605578520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:04:27.605649 containerd[1442]: time="2025-01-29T11:04:27.605642240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:04:27.605668 containerd[1442]: time="2025-01-29T11:04:27.605658720Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:04:27.605714 containerd[1442]: time="2025-01-29T11:04:27.605696000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:04:27.605738 containerd[1442]: time="2025-01-29T11:04:27.605730440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1
Jan 29 11:04:27.605904 containerd[1442]: time="2025-01-29T11:04:27.605883160Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 11:04:27.606155 containerd[1442]: time="2025-01-29T11:04:27.606135760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 11:04:27.606254 containerd[1442]: time="2025-01-29T11:04:27.606237440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 11:04:27.606289 containerd[1442]: time="2025-01-29T11:04:27.606258920Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 11:04:27.606289 containerd[1442]: time="2025-01-29T11:04:27.606273680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 11:04:27.606289 containerd[1442]: time="2025-01-29T11:04:27.606286960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 11:04:27.606342 containerd[1442]: time="2025-01-29T11:04:27.606300440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 11:04:27.606342 containerd[1442]: time="2025-01-29T11:04:27.606312960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 11:04:27.606342 containerd[1442]: time="2025-01-29T11:04:27.606327280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 11:04:27.606434 containerd[1442]: time="2025-01-29T11:04:27.606342040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 11:04:27.606434 containerd[1442]: time="2025-01-29T11:04:27.606355200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 11:04:27.606434 containerd[1442]: time="2025-01-29T11:04:27.606367400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 11:04:27.606434 containerd[1442]: time="2025-01-29T11:04:27.606379400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 11:04:27.606434 containerd[1442]: time="2025-01-29T11:04:27.606400000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.606434 containerd[1442]: time="2025-01-29T11:04:27.606413560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.606434 containerd[1442]: time="2025-01-29T11:04:27.606425400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.606434 containerd[1442]: time="2025-01-29T11:04:27.606437240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.606568 containerd[1442]: time="2025-01-29T11:04:27.606449240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.606568 containerd[1442]: time="2025-01-29T11:04:27.606462280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.606568 containerd[1442]: time="2025-01-29T11:04:27.606473400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.606568 containerd[1442]: time="2025-01-29T11:04:27.606486360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.606568 containerd[1442]: time="2025-01-29T11:04:27.606498720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.606568 containerd[1442]: time="2025-01-29T11:04:27.606513680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.606568 containerd[1442]: time="2025-01-29T11:04:27.606525200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.606568 containerd[1442]: time="2025-01-29T11:04:27.606541400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.606568 containerd[1442]: time="2025-01-29T11:04:27.606553640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.606568 containerd[1442]: time="2025-01-29T11:04:27.606568360Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 11:04:27.606745 containerd[1442]: time="2025-01-29T11:04:27.606589280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.606745 containerd[1442]: time="2025-01-29T11:04:27.606602760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.606745 containerd[1442]: time="2025-01-29T11:04:27.606615120Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 11:04:27.606909 containerd[1442]: time="2025-01-29T11:04:27.606889800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 11:04:27.606934 containerd[1442]: time="2025-01-29T11:04:27.606915280Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 11:04:27.606934 containerd[1442]: time="2025-01-29T11:04:27.606926480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 11:04:27.607008 containerd[1442]: time="2025-01-29T11:04:27.606990760Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 11:04:27.607008 containerd[1442]: time="2025-01-29T11:04:27.607005360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.607047 containerd[1442]: time="2025-01-29T11:04:27.607019600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 11:04:27.607047 containerd[1442]: time="2025-01-29T11:04:27.607030080Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 11:04:27.607047 containerd[1442]: time="2025-01-29T11:04:27.607041760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 11:04:27.607426 containerd[1442]: time="2025-01-29T11:04:27.607375960Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 11:04:27.607426 containerd[1442]: time="2025-01-29T11:04:27.607429120Z" level=info msg="Connect containerd service"
Jan 29 11:04:27.607550 containerd[1442]: time="2025-01-29T11:04:27.607461480Z" level=info msg="using legacy CRI server"
Jan 29 11:04:27.607550 containerd[1442]: time="2025-01-29T11:04:27.607468520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 11:04:27.607817 containerd[1442]: time="2025-01-29T11:04:27.607795080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 11:04:27.608540 containerd[1442]: time="2025-01-29T11:04:27.608511760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 11:04:27.609151 containerd[1442]: time="2025-01-29T11:04:27.609099160Z" level=info msg="Start subscribing containerd event"
Jan 29 11:04:27.609282 containerd[1442]: time="2025-01-29T11:04:27.609256480Z" level=info msg="Start recovering state"
Jan 29 11:04:27.609375 containerd[1442]: time="2025-01-29T11:04:27.609353360Z" level=info msg="Start event monitor"
Jan 29 11:04:27.609444 containerd[1442]: time="2025-01-29T11:04:27.609425960Z" level=info msg="Start snapshots syncer"
Jan 29 11:04:27.609477 containerd[1442]: time="2025-01-29T11:04:27.609447040Z" level=info msg="Start cni network conf syncer for default"
Jan 29 11:04:27.609477 containerd[1442]: time="2025-01-29T11:04:27.609462680Z" level=info msg="Start streaming server"
Jan 29 11:04:27.610412 containerd[1442]: time="2025-01-29T11:04:27.610384160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 11:04:27.610464 containerd[1442]: time="2025-01-29T11:04:27.610449200Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 11:04:27.610634 containerd[1442]: time="2025-01-29T11:04:27.610612520Z" level=info msg="containerd successfully booted in 0.038458s"
Jan 29 11:04:27.610729 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 11:04:27.723885 tar[1437]: linux-arm64/LICENSE
Jan 29 11:04:27.723885 tar[1437]: linux-arm64/README.md
Jan 29 11:04:27.736157 sshd_keygen[1444]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 11:04:27.736426 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 29 11:04:27.756765 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 11:04:27.767076 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 11:04:27.772847 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 11:04:27.773074 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 11:04:27.777789 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 11:04:27.792446 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 11:04:27.795134 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 11:04:27.797309 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 29 11:04:27.798420 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 11:04:29.085891 systemd-networkd[1372]: eth0: Gained IPv6LL
Jan 29 11:04:29.088353 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 11:04:29.089943 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 11:04:29.099968 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 29 11:04:29.102251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:04:29.104219 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 11:04:29.120468 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 29 11:04:29.121102 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 29 11:04:29.122909 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 11:04:29.124490 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 11:04:29.575730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:04:29.576990 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 11:04:29.580324 (kubelet)[1527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:04:29.582864 systemd[1]: Startup finished in 576ms (kernel) + 4.914s (initrd) + 3.997s (userspace) = 9.488s.
Jan 29 11:04:29.999414 kubelet[1527]: E0129 11:04:29.999299 1527 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:04:30.001974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:04:30.002136 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:04:33.072577 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 11:04:33.074052 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:59580.service - OpenSSH per-connection server daemon (10.0.0.1:59580).
Jan 29 11:04:33.136057 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 59580 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:04:33.137843 sshd-session[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:04:33.147957 systemd-logind[1421]: New session 1 of user core.
Jan 29 11:04:33.149066 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 11:04:33.163017 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 11:04:33.173756 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 11:04:33.176241 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 11:04:33.183453 (systemd)[1545]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 11:04:33.256423 systemd[1545]: Queued start job for default target default.target.
Jan 29 11:04:33.264663 systemd[1545]: Created slice app.slice - User Application Slice.
Jan 29 11:04:33.264728 systemd[1545]: Reached target paths.target - Paths.
Jan 29 11:04:33.264741 systemd[1545]: Reached target timers.target - Timers.
Jan 29 11:04:33.266052 systemd[1545]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 11:04:33.276500 systemd[1545]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 11:04:33.276576 systemd[1545]: Reached target sockets.target - Sockets.
Jan 29 11:04:33.276601 systemd[1545]: Reached target basic.target - Basic System.
Jan 29 11:04:33.276642 systemd[1545]: Reached target default.target - Main User Target.
Jan 29 11:04:33.276677 systemd[1545]: Startup finished in 87ms.
Jan 29 11:04:33.276935 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 11:04:33.278327 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 11:04:33.336911 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:59584.service - OpenSSH per-connection server daemon (10.0.0.1:59584).
Jan 29 11:04:33.381011 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 59584 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:04:33.382297 sshd-session[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:04:33.386306 systemd-logind[1421]: New session 2 of user core.
Jan 29 11:04:33.396887 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 11:04:33.447961 sshd[1558]: Connection closed by 10.0.0.1 port 59584
Jan 29 11:04:33.448317 sshd-session[1556]: pam_unix(sshd:session): session closed for user core
Jan 29 11:04:33.458234 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:59584.service: Deactivated successfully.
Jan 29 11:04:33.459925 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 11:04:33.461911 systemd-logind[1421]: Session 2 logged out. Waiting for processes to exit.
Jan 29 11:04:33.463533 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:59588.service - OpenSSH per-connection server daemon (10.0.0.1:59588).
Jan 29 11:04:33.464426 systemd-logind[1421]: Removed session 2.
Jan 29 11:04:33.510673 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 59588 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:04:33.511981 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:04:33.516164 systemd-logind[1421]: New session 3 of user core.
Jan 29 11:04:33.528911 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 11:04:33.575603 sshd[1565]: Connection closed by 10.0.0.1 port 59588
Jan 29 11:04:33.575950 sshd-session[1563]: pam_unix(sshd:session): session closed for user core
Jan 29 11:04:33.594219 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:59588.service: Deactivated successfully.
Jan 29 11:04:33.597891 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 11:04:33.599292 systemd-logind[1421]: Session 3 logged out. Waiting for processes to exit.
Jan 29 11:04:33.600547 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:59594.service - OpenSSH per-connection server daemon (10.0.0.1:59594).
Jan 29 11:04:33.601355 systemd-logind[1421]: Removed session 3.
Jan 29 11:04:33.656104 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 59594 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:04:33.657373 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:04:33.661518 systemd-logind[1421]: New session 4 of user core.
Jan 29 11:04:33.671854 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 11:04:33.722889 sshd[1572]: Connection closed by 10.0.0.1 port 59594
Jan 29 11:04:33.723385 sshd-session[1570]: pam_unix(sshd:session): session closed for user core
Jan 29 11:04:33.733258 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:59594.service: Deactivated successfully.
Jan 29 11:04:33.734972 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 11:04:33.736853 systemd-logind[1421]: Session 4 logged out. Waiting for processes to exit.
Jan 29 11:04:33.738101 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:59604.service - OpenSSH per-connection server daemon (10.0.0.1:59604).
Jan 29 11:04:33.738863 systemd-logind[1421]: Removed session 4.
Jan 29 11:04:33.781274 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 59604 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:04:33.782534 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:04:33.786627 systemd-logind[1421]: New session 5 of user core.
Jan 29 11:04:33.797866 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 11:04:33.858869 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 11:04:33.859169 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:04:33.872701 sudo[1580]: pam_unix(sudo:session): session closed for user root
Jan 29 11:04:33.874200 sshd[1579]: Connection closed by 10.0.0.1 port 59604
Jan 29 11:04:33.874762 sshd-session[1577]: pam_unix(sshd:session): session closed for user core
Jan 29 11:04:33.883315 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:59604.service: Deactivated successfully.
Jan 29 11:04:33.884907 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 11:04:33.886413 systemd-logind[1421]: Session 5 logged out. Waiting for processes to exit.
Jan 29 11:04:33.887915 systemd[1]: Started sshd@5-10.0.0.81:22-10.0.0.1:59612.service - OpenSSH per-connection server daemon (10.0.0.1:59612).
Jan 29 11:04:33.888717 systemd-logind[1421]: Removed session 5.
Jan 29 11:04:33.931852 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 59612 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:04:33.933156 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:04:33.937375 systemd-logind[1421]: New session 6 of user core.
Jan 29 11:04:33.951865 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 11:04:34.002392 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 11:04:34.002705 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:04:34.005771 sudo[1589]: pam_unix(sudo:session): session closed for user root
Jan 29 11:04:34.010592 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 11:04:34.010908 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:04:34.032135 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:04:34.055513 augenrules[1611]: No rules
Jan 29 11:04:34.056762 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:04:34.057764 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:04:34.059039 sudo[1588]: pam_unix(sudo:session): session closed for user root
Jan 29 11:04:34.060238 sshd[1587]: Connection closed by 10.0.0.1 port 59612
Jan 29 11:04:34.060841 sshd-session[1585]: pam_unix(sshd:session): session closed for user core
Jan 29 11:04:34.072362 systemd[1]: sshd@5-10.0.0.81:22-10.0.0.1:59612.service: Deactivated successfully.
Jan 29 11:04:34.074016 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 11:04:34.076947 systemd-logind[1421]: Session 6 logged out. Waiting for processes to exit.
Jan 29 11:04:34.091005 systemd[1]: Started sshd@6-10.0.0.81:22-10.0.0.1:59626.service - OpenSSH per-connection server daemon (10.0.0.1:59626).
Jan 29 11:04:34.092061 systemd-logind[1421]: Removed session 6.
Jan 29 11:04:34.131424 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 59626 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:04:34.132566 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:04:34.136745 systemd-logind[1421]: New session 7 of user core.
Jan 29 11:04:34.143934 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 11:04:34.193856 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 11:04:34.194157 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:04:34.507936 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 11:04:34.508023 (dockerd)[1643]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 11:04:34.787778 dockerd[1643]: time="2025-01-29T11:04:34.787633897Z" level=info msg="Starting up"
Jan 29 11:04:34.925777 dockerd[1643]: time="2025-01-29T11:04:34.925732089Z" level=info msg="Loading containers: start."
Jan 29 11:04:35.077696 kernel: Initializing XFRM netlink socket
Jan 29 11:04:35.153691 systemd-networkd[1372]: docker0: Link UP
Jan 29 11:04:35.194957 dockerd[1643]: time="2025-01-29T11:04:35.194886877Z" level=info msg="Loading containers: done."
Jan 29 11:04:35.215883 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3021115914-merged.mount: Deactivated successfully.
Jan 29 11:04:35.218484 dockerd[1643]: time="2025-01-29T11:04:35.218432651Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 11:04:35.218563 dockerd[1643]: time="2025-01-29T11:04:35.218549228Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Jan 29 11:04:35.218677 dockerd[1643]: time="2025-01-29T11:04:35.218651478Z" level=info msg="Daemon has completed initialization"
Jan 29 11:04:35.245714 dockerd[1643]: time="2025-01-29T11:04:35.245590513Z" level=info msg="API listen on /run/docker.sock"
Jan 29 11:04:35.245944 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 11:04:35.830038 containerd[1442]: time="2025-01-29T11:04:35.829987372Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\""
Jan 29 11:04:36.812458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4117749641.mount: Deactivated successfully.
Jan 29 11:04:38.682374 containerd[1442]: time="2025-01-29T11:04:38.682304520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:38.682816 containerd[1442]: time="2025-01-29T11:04:38.682764758Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618072"
Jan 29 11:04:38.683613 containerd[1442]: time="2025-01-29T11:04:38.683561801Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:38.687198 containerd[1442]: time="2025-01-29T11:04:38.687148790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:38.688821 containerd[1442]: time="2025-01-29T11:04:38.688773329Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 2.858738257s"
Jan 29 11:04:38.688821 containerd[1442]: time="2025-01-29T11:04:38.688825704Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\""
Jan 29 11:04:38.689759 containerd[1442]: time="2025-01-29T11:04:38.689723626Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\""
Jan 29 11:04:40.252362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:04:40.264940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:04:40.356517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:04:40.360803 (kubelet)[1904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:04:40.407240 kubelet[1904]: E0129 11:04:40.407193 1904 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:04:40.410794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:04:40.410932 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:04:40.593726 containerd[1442]: time="2025-01-29T11:04:40.593597957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:40.594631 containerd[1442]: time="2025-01-29T11:04:40.594431243Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469469"
Jan 29 11:04:40.595505 containerd[1442]: time="2025-01-29T11:04:40.595277047Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:40.598772 containerd[1442]: time="2025-01-29T11:04:40.598737002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:40.600584 containerd[1442]: time="2025-01-29T11:04:40.600460819Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 1.910698931s"
Jan 29 11:04:40.600584 containerd[1442]: time="2025-01-29T11:04:40.600504040Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\""
Jan 29 11:04:40.601073 containerd[1442]: time="2025-01-29T11:04:40.600903687Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\""
Jan 29 11:04:42.015631 containerd[1442]: time="2025-01-29T11:04:42.015451636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:42.016526 containerd[1442]: time="2025-01-29T11:04:42.016453262Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024219"
Jan 29 11:04:42.017073 containerd[1442]: time="2025-01-29T11:04:42.017043612Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:42.022024 containerd[1442]: time="2025-01-29T11:04:42.021972609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:42.023188 containerd[1442]: time="2025-01-29T11:04:42.023150411Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.422216077s"
Jan 29 11:04:42.023188 containerd[1442]: time="2025-01-29T11:04:42.023188211Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\""
Jan 29 11:04:42.023767 containerd[1442]: time="2025-01-29T11:04:42.023600400Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 29 11:04:43.674139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount596081704.mount: Deactivated successfully.
Jan 29 11:04:43.925207 containerd[1442]: time="2025-01-29T11:04:43.925093811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:43.925982 containerd[1442]: time="2025-01-29T11:04:43.925497519Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772119"
Jan 29 11:04:43.926344 containerd[1442]: time="2025-01-29T11:04:43.926305133Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:43.928584 containerd[1442]: time="2025-01-29T11:04:43.928556207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:43.929366 containerd[1442]: time="2025-01-29T11:04:43.929207354Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.905577999s"
Jan 29 11:04:43.929366 containerd[1442]: time="2025-01-29T11:04:43.929239705Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\""
Jan 29 11:04:43.930091 containerd[1442]: time="2025-01-29T11:04:43.929701159Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 11:04:44.640379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2900265262.mount: Deactivated successfully.
Jan 29 11:04:45.894730 containerd[1442]: time="2025-01-29T11:04:45.894652233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:45.895746 containerd[1442]: time="2025-01-29T11:04:45.895426921Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Jan 29 11:04:45.896418 containerd[1442]: time="2025-01-29T11:04:45.896355886Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:45.899823 containerd[1442]: time="2025-01-29T11:04:45.899786794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:45.900789 containerd[1442]: time="2025-01-29T11:04:45.900659080Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.97092655s"
Jan 29 11:04:45.900789 containerd[1442]: time="2025-01-29T11:04:45.900701550Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 29 11:04:45.901160 containerd[1442]: time="2025-01-29T11:04:45.901136956Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 29 11:04:46.459931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3517833582.mount: Deactivated successfully.
Jan 29 11:04:46.465055 containerd[1442]: time="2025-01-29T11:04:46.463555108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:46.465055 containerd[1442]: time="2025-01-29T11:04:46.464205734Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jan 29 11:04:46.465055 containerd[1442]: time="2025-01-29T11:04:46.464790484Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:46.466829 containerd[1442]: time="2025-01-29T11:04:46.466792584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:46.467794 containerd[1442]: time="2025-01-29T11:04:46.467764301Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 566.598483ms"
Jan 29 11:04:46.467794 containerd[1442]: time="2025-01-29T11:04:46.467790587Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 29 11:04:46.468360 containerd[1442]: time="2025-01-29T11:04:46.468179345Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 29 11:04:47.161890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3131430374.mount: Deactivated successfully.
Jan 29 11:04:49.629182 containerd[1442]: time="2025-01-29T11:04:49.629137305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:49.630276 containerd[1442]: time="2025-01-29T11:04:49.629600564Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427"
Jan 29 11:04:49.630862 containerd[1442]: time="2025-01-29T11:04:49.630832599Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:49.638139 containerd[1442]: time="2025-01-29T11:04:49.638070096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:04:49.639529 containerd[1442]: time="2025-01-29T11:04:49.639485321Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.171270515s"
Jan 29 11:04:49.639961 containerd[1442]: time="2025-01-29T11:04:49.639530112Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jan 29 11:04:50.661151 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 11:04:50.670850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:04:50.783015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:04:50.787051 (kubelet)[2058]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:04:50.824882 kubelet[2058]: E0129 11:04:50.824822 2058 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:04:50.827521 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:04:50.827661 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:04:53.949846 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:04:53.962936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:04:53.982487 systemd[1]: Reloading requested from client PID 2074 ('systemctl') (unit session-7.scope)...
Jan 29 11:04:53.982503 systemd[1]: Reloading...
Jan 29 11:04:54.049715 zram_generator::config[2116]: No configuration found.
Jan 29 11:04:54.192967 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:04:54.246383 systemd[1]: Reloading finished in 263 ms.
Jan 29 11:04:54.293150 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:04:54.296015 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 11:04:54.296758 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:04:54.298308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:04:54.390329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:04:54.394810 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 11:04:54.438239 kubelet[2160]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:04:54.438239 kubelet[2160]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 11:04:54.438239 kubelet[2160]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:04:54.440333 kubelet[2160]: I0129 11:04:54.438930 2160 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 11:04:54.916664 kubelet[2160]: I0129 11:04:54.916610 2160 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 29 11:04:54.916664 kubelet[2160]: I0129 11:04:54.916645 2160 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 11:04:54.916916 kubelet[2160]: I0129 11:04:54.916890 2160 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 29 11:04:54.951597 kubelet[2160]: E0129 11:04:54.951552 2160 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:04:54.952820 kubelet[2160]: I0129 11:04:54.952797 2160 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 11:04:54.961867 kubelet[2160]: E0129 11:04:54.961824 2160 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 11:04:54.961867 kubelet[2160]: I0129 11:04:54.961860 2160 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 11:04:54.965253 kubelet[2160]: I0129 11:04:54.965181 2160 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 11:04:54.966098 kubelet[2160]: I0129 11:04:54.966053 2160 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 11:04:54.966247 kubelet[2160]: I0129 11:04:54.966219 2160 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 11:04:54.966404 kubelet[2160]: I0129 11:04:54.966247 2160 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 11:04:54.966550 kubelet[2160]: I0129 11:04:54.966539 2160 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 11:04:54.966574 kubelet[2160]: I0129 11:04:54.966551 2160 container_manager_linux.go:300] "Creating device plugin manager"
Jan 29 11:04:54.966756 kubelet[2160]: I0129 11:04:54.966745 2160 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:04:54.969993 kubelet[2160]: I0129 11:04:54.969951 2160 kubelet.go:408] "Attempting to sync node with API server"
Jan 29 11:04:54.969993 kubelet[2160]: I0129 11:04:54.969975 2160 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 11:04:54.969993 kubelet[2160]: I0129 11:04:54.969999 2160 kubelet.go:314] "Adding apiserver pod source"
Jan 29 11:04:54.970577 kubelet[2160]: I0129 11:04:54.970009 2160 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 11:04:54.974244 kubelet[2160]: W0129 11:04:54.974188 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused
Jan 29 11:04:54.974328 kubelet[2160]: E0129 11:04:54.974250 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:04:54.974328 kubelet[2160]: W0129 11:04:54.974189 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused
Jan 29 11:04:54.974328 kubelet[2160]: E0129 11:04:54.974317 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:04:54.974646 kubelet[2160]: I0129 11:04:54.974530 2160 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 11:04:54.976587 kubelet[2160]: I0129 11:04:54.976565 2160 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 11:04:54.977209 kubelet[2160]: W0129 11:04:54.977190 2160 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 11:04:54.980343 kubelet[2160]: I0129 11:04:54.980085 2160 server.go:1269] "Started kubelet"
Jan 29 11:04:54.980343 kubelet[2160]: I0129 11:04:54.980252 2160 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 11:04:54.981225 kubelet[2160]: I0129 11:04:54.981208 2160 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 11:04:54.981282 kubelet[2160]: I0129 11:04:54.981269 2160 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 11:04:54.982342 kubelet[2160]: I0129 11:04:54.982199 2160 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 11:04:54.982435 kubelet[2160]: I0129 11:04:54.982415 2160 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 11:04:54.982664 kubelet[2160]: I0129 11:04:54.982638 2160 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 11:04:54.983934 kubelet[2160]: E0129 11:04:54.983835 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 11:04:54.984022 kubelet[2160]: I0129 11:04:54.983957 2160 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 11:04:54.984200 kubelet[2160]: I0129 11:04:54.984179 2160 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 11:04:54.984253 kubelet[2160]: I0129 11:04:54.984239 2160 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 11:04:54.984960 kubelet[2160]: W0129 11:04:54.984584 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused
Jan 29 11:04:54.984960 kubelet[2160]: E0129 11:04:54.984905 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:04:54.985233 kubelet[2160]: I0129 11:04:54.985107 2160 factory.go:221] Registration of the systemd container factory successfully
Jan 29 11:04:54.985233 kubelet[2160]: I0129 11:04:54.985196 2160 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 11:04:54.986106 kubelet[2160]: E0129 11:04:54.986051 2160 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="200ms"
Jan 29 11:04:54.986820 kubelet[2160]: E0129 11:04:54.986796 2160 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 11:04:54.987223 kubelet[2160]: I0129 11:04:54.987177 2160 factory.go:221] Registration of the containerd container factory successfully
Jan 29 11:04:54.988188 kubelet[2160]: E0129 11:04:54.986195 2160 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f25076b183f05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:04:54.980050693 +0000 UTC m=+0.581595305,LastTimestamp:2025-01-29 11:04:54.980050693 +0000 UTC m=+0.581595305,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 29 11:04:54.995563 kubelet[2160]: I0129 11:04:54.995497 2160 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 11:04:54.996658 kubelet[2160]: I0129 11:04:54.996629 2160 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 11:04:54.996658 kubelet[2160]: I0129 11:04:54.996652 2160 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 11:04:54.996802 kubelet[2160]: I0129 11:04:54.996670 2160 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 29 11:04:54.996802 kubelet[2160]: E0129 11:04:54.996794 2160 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 11:04:55.000594 kubelet[2160]: W0129 11:04:55.000545 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused
Jan 29 11:04:55.000710 kubelet[2160]: E0129 11:04:55.000617 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:04:55.000987 kubelet[2160]: I0129 11:04:55.000951 2160 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 11:04:55.000987 kubelet[2160]: I0129 11:04:55.000967 2160 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 11:04:55.001054 kubelet[2160]: I0129 11:04:55.000993 2160 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:04:55.084796 kubelet[2160]: E0129 11:04:55.084738 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 11:04:55.097074 kubelet[2160]: E0129 11:04:55.097051 2160 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 29 11:04:55.101601 kubelet[2160]: I0129 11:04:55.101481 2160 policy_none.go:49] "None policy: Start"
Jan 29 11:04:55.102266 kubelet[2160]: I0129 11:04:55.102244 2160 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 11:04:55.102343 kubelet[2160]: I0129 11:04:55.102272 2160 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 11:04:55.110168 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 11:04:55.127677 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 11:04:55.131626 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 11:04:55.144807 kubelet[2160]: I0129 11:04:55.144758 2160 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 11:04:55.144991 kubelet[2160]: I0129 11:04:55.144974 2160 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 11:04:55.145027 kubelet[2160]: I0129 11:04:55.144991 2160 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 11:04:55.145605 kubelet[2160]: I0129 11:04:55.145212 2160 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 11:04:55.150786 kubelet[2160]: E0129 11:04:55.150752 2160 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 29 11:04:55.187964 kubelet[2160]: E0129 11:04:55.187841 2160 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="400ms"
Jan 29 11:04:55.247008 kubelet[2160]: I0129 11:04:55.246971 2160 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 29 11:04:55.247429 kubelet[2160]: E0129 11:04:55.247375 2160 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost"
Jan 29 11:04:55.305460 systemd[1]: Created slice kubepods-burstable-podf1755880e646c39db872e8a1b0f601d1.slice - libcontainer container kubepods-burstable-podf1755880e646c39db872e8a1b0f601d1.slice.
Jan 29 11:04:55.326799 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice.
Jan 29 11:04:55.339969 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice.
Jan 29 11:04:55.387888 kubelet[2160]: I0129 11:04:55.387820 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:04:55.387888 kubelet[2160]: I0129 11:04:55.387868 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1755880e646c39db872e8a1b0f601d1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f1755880e646c39db872e8a1b0f601d1\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:04:55.387888 kubelet[2160]: I0129 11:04:55.387887 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1755880e646c39db872e8a1b0f601d1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f1755880e646c39db872e8a1b0f601d1\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:04:55.388066 kubelet[2160]: I0129 11:04:55.387904 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1755880e646c39db872e8a1b0f601d1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f1755880e646c39db872e8a1b0f601d1\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 11:04:55.388066 kubelet[2160]: I0129 11:04:55.387919 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:04:55.388066 kubelet[2160]: I0129 11:04:55.387941 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:04:55.388066 kubelet[2160]: I0129 11:04:55.387959 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:04:55.388066 kubelet[2160]: I0129 11:04:55.387974 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 11:04:55.388167 kubelet[2160]: I0129 11:04:55.387990 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost"
Jan 29 11:04:55.448557 kubelet[2160]: I0129 11:04:55.448439 2160 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 29 11:04:55.448885 kubelet[2160]: E0129 11:04:55.448736 2160 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost"
Jan 29 11:04:55.588598 kubelet[2160]: E0129 11:04:55.588514 2160 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="800ms"
Jan 29 11:04:55.624968 kubelet[2160]: E0129 11:04:55.624903 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:04:55.625630 containerd[1442]: time="2025-01-29T11:04:55.625580638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f1755880e646c39db872e8a1b0f601d1,Namespace:kube-system,Attempt:0,}"
Jan 29 11:04:55.638068 kubelet[2160]: E0129 11:04:55.638018 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:04:55.638543 containerd[1442]: time="2025-01-29T11:04:55.638500127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}"
Jan 29 11:04:55.642342 kubelet[2160]: E0129 11:04:55.642309 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:04:55.643007 containerd[1442]: time="2025-01-29T11:04:55.642757128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}"
Jan 29 11:04:55.822606 kubelet[2160]: W0129 11:04:55.822470 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused
Jan 29 11:04:55.822606 kubelet[2160]: E0129 11:04:55.822543 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:04:55.825286 kubelet[2160]: W0129 11:04:55.825246 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused
Jan 29 11:04:55.825345 kubelet[2160]: E0129 11:04:55.825291 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:04:55.837958 kubelet[2160]: W0129 11:04:55.837916 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused
Jan 29 11:04:55.838019 kubelet[2160]: E0129 11:04:55.837961 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:04:55.850241 kubelet[2160]: I0129 11:04:55.850210 2160 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 29 11:04:55.850524 kubelet[2160]: E0129 11:04:55.850502 2160 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost"
Jan 29 11:04:56.175810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3365127772.mount: Deactivated successfully.
Jan 29 11:04:56.180226 containerd[1442]: time="2025-01-29T11:04:56.180175068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:04:56.181853 containerd[1442]: time="2025-01-29T11:04:56.181791555Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jan 29 11:04:56.183760 containerd[1442]: time="2025-01-29T11:04:56.183671702Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:04:56.185362 containerd[1442]: time="2025-01-29T11:04:56.185270688Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:04:56.186813 containerd[1442]: time="2025-01-29T11:04:56.186773024Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:04:56.187396 containerd[1442]: time="2025-01-29T11:04:56.187190351Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 11:04:56.188096 containerd[1442]: time="2025-01-29T11:04:56.188034673Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 11:04:56.188808 containerd[1442]: time="2025-01-29T11:04:56.188753777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:04:56.189710 containerd[1442]: time="2025-01-29T11:04:56.189653397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.981635ms"
Jan 29 11:04:56.194567 containerd[1442]: time="2025-01-29T11:04:56.194325896Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.606771ms"
Jan 29 11:04:56.196700 containerd[1442]: time="2025-01-29T11:04:56.196642589Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 553.812833ms"
Jan 29 11:04:56.346393 containerd[1442]: time="2025-01-29T11:04:56.346292196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:04:56.346393 containerd[1442]: time="2025-01-29T11:04:56.346366352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:04:56.346676 containerd[1442]: time="2025-01-29T11:04:56.346384691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:04:56.346676 containerd[1442]: time="2025-01-29T11:04:56.346465799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:04:56.346822 containerd[1442]: time="2025-01-29T11:04:56.346426724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:04:56.346822 containerd[1442]: time="2025-01-29T11:04:56.346495646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:04:56.346822 containerd[1442]: time="2025-01-29T11:04:56.346511827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:04:56.346822 containerd[1442]: time="2025-01-29T11:04:56.346716235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:04:56.348080 containerd[1442]: time="2025-01-29T11:04:56.347477212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:04:56.348080 containerd[1442]: time="2025-01-29T11:04:56.347524758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:04:56.348080 containerd[1442]: time="2025-01-29T11:04:56.347541259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:04:56.348080 containerd[1442]: time="2025-01-29T11:04:56.347606785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:04:56.366853 systemd[1]: Started cri-containerd-1edb598862a5486cc7d694d83b4274ef78d61c8779e42d08c4423e7be187a75f.scope - libcontainer container 1edb598862a5486cc7d694d83b4274ef78d61c8779e42d08c4423e7be187a75f. Jan 29 11:04:56.367858 systemd[1]: Started cri-containerd-3488032bc417ef71c2df597a74cc07133dfe74a229ffcdb43468527080840482.scope - libcontainer container 3488032bc417ef71c2df597a74cc07133dfe74a229ffcdb43468527080840482. Jan 29 11:04:56.369032 systemd[1]: Started cri-containerd-69e6cb9b960f585883e40b6a28d765dc4e0b34783443d1b02c2f9b2872a4de69.scope - libcontainer container 69e6cb9b960f585883e40b6a28d765dc4e0b34783443d1b02c2f9b2872a4de69. Jan 29 11:04:56.388454 kubelet[2160]: W0129 11:04:56.388377 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Jan 29 11:04:56.388563 kubelet[2160]: E0129 11:04:56.388460 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:04:56.390199 kubelet[2160]: E0129 11:04:56.390161 2160 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="1.6s" Jan 29 11:04:56.402468 containerd[1442]: time="2025-01-29T11:04:56.402345334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f1755880e646c39db872e8a1b0f601d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"69e6cb9b960f585883e40b6a28d765dc4e0b34783443d1b02c2f9b2872a4de69\"" Jan 29 11:04:56.405038 kubelet[2160]: E0129 11:04:56.404838 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:04:56.405159 containerd[1442]: time="2025-01-29T11:04:56.404951577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3488032bc417ef71c2df597a74cc07133dfe74a229ffcdb43468527080840482\"" Jan 29 11:04:56.406055 kubelet[2160]: E0129 11:04:56.405984 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:04:56.407470 containerd[1442]: time="2025-01-29T11:04:56.407369235Z" level=info msg="CreateContainer within sandbox \"69e6cb9b960f585883e40b6a28d765dc4e0b34783443d1b02c2f9b2872a4de69\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:04:56.409474 containerd[1442]: time="2025-01-29T11:04:56.409434732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"1edb598862a5486cc7d694d83b4274ef78d61c8779e42d08c4423e7be187a75f\"" Jan 29 11:04:56.409552 containerd[1442]: time="2025-01-29T11:04:56.409521034Z" level=info msg="CreateContainer within 
sandbox \"3488032bc417ef71c2df597a74cc07133dfe74a229ffcdb43468527080840482\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:04:56.411233 kubelet[2160]: E0129 11:04:56.411195 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:04:56.413334 containerd[1442]: time="2025-01-29T11:04:56.413275295Z" level=info msg="CreateContainer within sandbox \"1edb598862a5486cc7d694d83b4274ef78d61c8779e42d08c4423e7be187a75f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:04:56.426402 containerd[1442]: time="2025-01-29T11:04:56.426267318Z" level=info msg="CreateContainer within sandbox \"69e6cb9b960f585883e40b6a28d765dc4e0b34783443d1b02c2f9b2872a4de69\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"52f23b0a97fa23682bbcdc841ab7dc6862ef913b2cc568a7e0d1bd631bb34999\"" Jan 29 11:04:56.427019 containerd[1442]: time="2025-01-29T11:04:56.426991457Z" level=info msg="StartContainer for \"52f23b0a97fa23682bbcdc841ab7dc6862ef913b2cc568a7e0d1bd631bb34999\"" Jan 29 11:04:56.430605 containerd[1442]: time="2025-01-29T11:04:56.430561487Z" level=info msg="CreateContainer within sandbox \"3488032bc417ef71c2df597a74cc07133dfe74a229ffcdb43468527080840482\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4370a2da13b971e109fd2e7a4b3de160389aa73a7f48d63978c9e5c291fde74e\"" Jan 29 11:04:56.430998 containerd[1442]: time="2025-01-29T11:04:56.430974858Z" level=info msg="StartContainer for \"4370a2da13b971e109fd2e7a4b3de160389aa73a7f48d63978c9e5c291fde74e\"" Jan 29 11:04:56.433561 containerd[1442]: time="2025-01-29T11:04:56.433467351Z" level=info msg="CreateContainer within sandbox \"1edb598862a5486cc7d694d83b4274ef78d61c8779e42d08c4423e7be187a75f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7ebed2e3e61b41156816a88362d0f07c9a305e660dda2466c15bcc72da76c718\"" Jan 29 11:04:56.434084 containerd[1442]: time="2025-01-29T11:04:56.434056323Z" level=info msg="StartContainer for \"7ebed2e3e61b41156816a88362d0f07c9a305e660dda2466c15bcc72da76c718\"" Jan 29 11:04:56.452851 systemd[1]: Started cri-containerd-52f23b0a97fa23682bbcdc841ab7dc6862ef913b2cc568a7e0d1bd631bb34999.scope - libcontainer container 52f23b0a97fa23682bbcdc841ab7dc6862ef913b2cc568a7e0d1bd631bb34999. Jan 29 11:04:56.456607 systemd[1]: Started cri-containerd-4370a2da13b971e109fd2e7a4b3de160389aa73a7f48d63978c9e5c291fde74e.scope - libcontainer container 4370a2da13b971e109fd2e7a4b3de160389aa73a7f48d63978c9e5c291fde74e. Jan 29 11:04:56.458003 systemd[1]: Started cri-containerd-7ebed2e3e61b41156816a88362d0f07c9a305e660dda2466c15bcc72da76c718.scope - libcontainer container 7ebed2e3e61b41156816a88362d0f07c9a305e660dda2466c15bcc72da76c718. 
Jan 29 11:04:56.491841 containerd[1442]: time="2025-01-29T11:04:56.491498165Z" level=info msg="StartContainer for \"4370a2da13b971e109fd2e7a4b3de160389aa73a7f48d63978c9e5c291fde74e\" returns successfully" Jan 29 11:04:56.491841 containerd[1442]: time="2025-01-29T11:04:56.491591818Z" level=info msg="StartContainer for \"52f23b0a97fa23682bbcdc841ab7dc6862ef913b2cc568a7e0d1bd631bb34999\" returns successfully" Jan 29 11:04:56.524576 containerd[1442]: time="2025-01-29T11:04:56.520906086Z" level=info msg="StartContainer for \"7ebed2e3e61b41156816a88362d0f07c9a305e660dda2466c15bcc72da76c718\" returns successfully" Jan 29 11:04:56.652723 kubelet[2160]: I0129 11:04:56.652674 2160 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:04:56.653236 kubelet[2160]: E0129 11:04:56.653158 2160 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Jan 29 11:04:57.009554 kubelet[2160]: E0129 11:04:57.009508 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:04:57.011003 kubelet[2160]: E0129 11:04:57.010973 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:04:57.013150 kubelet[2160]: E0129 11:04:57.013125 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:04:58.017027 kubelet[2160]: E0129 11:04:58.016987 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:04:58.017331 kubelet[2160]: E0129 11:04:58.017078 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:04:58.155427 kubelet[2160]: E0129 11:04:58.155380 2160 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:04:58.255563 kubelet[2160]: I0129 11:04:58.255522 2160 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:04:58.265065 kubelet[2160]: I0129 11:04:58.265018 2160 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:04:58.265065 kubelet[2160]: E0129 11:04:58.265063 2160 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 29 11:04:58.273477 kubelet[2160]: E0129 11:04:58.273153 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:04:58.373648 kubelet[2160]: E0129 11:04:58.373603 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:04:58.474296 kubelet[2160]: E0129 11:04:58.474224 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:04:58.575219 kubelet[2160]: E0129 11:04:58.575102 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jan 29 11:04:58.675612 kubelet[2160]: E0129 11:04:58.675575 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:04:58.776158 kubelet[2160]: E0129 11:04:58.776119 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:04:58.876694 kubelet[2160]: E0129 11:04:58.876570 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:04:58.977027 kubelet[2160]: I0129 11:04:58.976853 2160 apiserver.go:52] "Watching apiserver" Jan 29 11:04:58.984851 kubelet[2160]: I0129 11:04:58.984725 2160 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:05:00.361314 systemd[1]: Reloading requested from client PID 2434 ('systemctl') (unit session-7.scope)... Jan 29 11:05:00.361330 systemd[1]: Reloading... Jan 29 11:05:00.429715 zram_generator::config[2476]: No configuration found. Jan 29 11:05:00.527571 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:05:00.590393 systemd[1]: Reloading finished in 228 ms. Jan 29 11:05:00.625629 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:05:00.645547 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:05:00.646784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:05:00.654910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:05:00.741460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:05:00.745367 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:05:00.783144 kubelet[2515]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:05:00.783144 kubelet[2515]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:05:00.783144 kubelet[2515]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:05:00.783485 kubelet[2515]: I0129 11:05:00.783217 2515 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:05:00.790788 kubelet[2515]: I0129 11:05:00.790745 2515 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:05:00.790788 kubelet[2515]: I0129 11:05:00.790775 2515 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:05:00.790999 kubelet[2515]: I0129 11:05:00.790972 2515 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:05:00.792282 kubelet[2515]: I0129 11:05:00.792252 2515 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 29 11:05:00.794796 kubelet[2515]: I0129 11:05:00.794770 2515 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:05:00.797281 kubelet[2515]: E0129 11:05:00.797242 2515 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:05:00.797281 kubelet[2515]: I0129 11:05:00.797273 2515 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:05:00.800101 kubelet[2515]: I0129 11:05:00.799605 2515 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:05:00.800101 kubelet[2515]: I0129 11:05:00.799755 2515 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:05:00.800101 kubelet[2515]: I0129 11:05:00.799846 2515 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:05:00.800235 kubelet[2515]: I0129 11:05:00.799871 2515 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:05:00.800235 kubelet[2515]: I0129 11:05:00.800135 2515 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:05:00.800235 kubelet[2515]: I0129 11:05:00.800145 2515 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:05:00.800235 kubelet[2515]: I0129 11:05:00.800179 2515 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:05:00.800361 kubelet[2515]: I0129 11:05:00.800270 2515 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:05:00.800361 kubelet[2515]: I0129 11:05:00.800281 2515 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:05:00.800361 kubelet[2515]: I0129 11:05:00.800306 
2515 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:05:00.800361 kubelet[2515]: I0129 11:05:00.800317 2515 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:05:00.804688 kubelet[2515]: I0129 11:05:00.803648 2515 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:05:00.804688 kubelet[2515]: I0129 11:05:00.804150 2515 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:05:00.806863 kubelet[2515]: I0129 11:05:00.805210 2515 server.go:1269] "Started kubelet" Jan 29 11:05:00.806863 kubelet[2515]: I0129 11:05:00.806726 2515 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:05:00.808292 kubelet[2515]: I0129 11:05:00.807431 2515 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:05:00.808292 kubelet[2515]: I0129 11:05:00.807471 2515 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:05:00.808292 kubelet[2515]: I0129 11:05:00.807510 2515 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:05:00.808518 kubelet[2515]: I0129 11:05:00.808493 2515 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:05:00.808665 kubelet[2515]: I0129 11:05:00.808645 2515 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:05:00.809249 kubelet[2515]: I0129 11:05:00.809229 2515 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:05:00.810183 kubelet[2515]: I0129 11:05:00.810099 2515 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:05:00.810466 kubelet[2515]: I0129 11:05:00.810449 2515 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:05:00.811238 kubelet[2515]: E0129 11:05:00.811219 2515 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:05:00.813323 kubelet[2515]: I0129 11:05:00.813281 2515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:05:00.815646 kubelet[2515]: I0129 11:05:00.815609 2515 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:05:00.815646 kubelet[2515]: I0129 11:05:00.815644 2515 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:05:00.815764 kubelet[2515]: I0129 11:05:00.815660 2515 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:05:00.815764 kubelet[2515]: E0129 11:05:00.815713 2515 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:05:00.823964 kubelet[2515]: I0129 11:05:00.823938 2515 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:05:00.824711 kubelet[2515]: I0129 11:05:00.824146 2515 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:05:00.825057 kubelet[2515]: I0129 11:05:00.825038 2515 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:05:00.856966 kubelet[2515]: I0129 11:05:00.856923 2515 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:05:00.856966 kubelet[2515]: I0129 11:05:00.856943 2515 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:05:00.856966 kubelet[2515]: I0129 11:05:00.856960 2515 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:05:00.857115 kubelet[2515]: I0129 11:05:00.857094 2515 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:05:00.857115 kubelet[2515]: I0129 11:05:00.857104 2515 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:05:00.857160 kubelet[2515]: I0129 11:05:00.857121 2515 policy_none.go:49] "None policy: Start" Jan 29 11:05:00.857819 kubelet[2515]: I0129 11:05:00.857783 2515 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:05:00.857819 kubelet[2515]: I0129 11:05:00.857814 2515 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:05:00.857982 kubelet[2515]: I0129 11:05:00.857957 2515 state_mem.go:75] "Updated machine memory state" Jan 29 11:05:00.861585 kubelet[2515]: I0129 11:05:00.861558 2515 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:05:00.861952 kubelet[2515]: I0129 11:05:00.861745 2515 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:05:00.861952 kubelet[2515]: I0129 11:05:00.861762 2515 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:05:00.862050 kubelet[2515]: I0129 11:05:00.861957 2515 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:05:00.966555 kubelet[2515]: I0129 11:05:00.966353 2515 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:05:00.973108 kubelet[2515]: I0129 11:05:00.973080 2515 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 29 11:05:00.973195 kubelet[2515]: I0129 11:05:00.973165 2515 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:05:01.110060 kubelet[2515]: I0129 11:05:01.109998 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 29 11:05:01.110060 kubelet[2515]: I0129 11:05:01.110033 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1755880e646c39db872e8a1b0f601d1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f1755880e646c39db872e8a1b0f601d1\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:05:01.110060 kubelet[2515]: I0129 11:05:01.110053 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1755880e646c39db872e8a1b0f601d1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f1755880e646c39db872e8a1b0f601d1\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:05:01.110060 kubelet[2515]: I0129 11:05:01.110070 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:05:01.110317 kubelet[2515]: I0129 11:05:01.110089 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:05:01.110317 kubelet[2515]: I0129 11:05:01.110107 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:05:01.110317 kubelet[2515]: I0129 11:05:01.110128 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:05:01.110317 kubelet[2515]: I0129 11:05:01.110143 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1755880e646c39db872e8a1b0f601d1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f1755880e646c39db872e8a1b0f601d1\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:05:01.110317 kubelet[2515]: I0129 11:05:01.110158 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:05:01.224229 kubelet[2515]: E0129 11:05:01.224115 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:01.224229 kubelet[2515]: E0129 11:05:01.224141 2515 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:01.224229 kubelet[2515]: E0129 11:05:01.224119 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:01.367630 sudo[2551]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 11:05:01.367935 sudo[2551]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 11:05:01.793479 sudo[2551]: pam_unix(sudo:session): session closed for user root Jan 29 11:05:01.800803 kubelet[2515]: I0129 11:05:01.800771 2515 apiserver.go:52] "Watching apiserver" Jan 29 11:05:01.808843 kubelet[2515]: I0129 11:05:01.808811 2515 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:05:01.837882 kubelet[2515]: E0129 11:05:01.837847 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:01.838006 kubelet[2515]: E0129 11:05:01.837960 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:01.844783 kubelet[2515]: E0129 11:05:01.844742 2515 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:05:01.844955 kubelet[2515]: E0129 11:05:01.844934 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:01.857528 kubelet[2515]: I0129 11:05:01.857452 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.857438477 podStartE2EDuration="1.857438477s" podCreationTimestamp="2025-01-29 11:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:05:01.856571949 +0000 UTC m=+1.108107231" watchObservedRunningTime="2025-01-29 11:05:01.857438477 +0000 UTC m=+1.108973719" Jan 29 11:05:01.864904 kubelet[2515]: I0129 11:05:01.864839 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.864820571 podStartE2EDuration="1.864820571s" podCreationTimestamp="2025-01-29 11:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:05:01.864518498 +0000 UTC m=+1.116053780" watchObservedRunningTime="2025-01-29 11:05:01.864820571 +0000 UTC m=+1.116355853" Jan 29 11:05:01.890339 kubelet[2515]: I0129 11:05:01.890261 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.890244559 podStartE2EDuration="1.890244559s" podCreationTimestamp="2025-01-29 11:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:05:01.872565711 +0000 UTC m=+1.124100993" watchObservedRunningTime="2025-01-29 
11:05:01.890244559 +0000 UTC m=+1.141779841" Jan 29 11:05:02.839056 kubelet[2515]: E0129 11:05:02.839017 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:04.011088 sudo[1622]: pam_unix(sudo:session): session closed for user root Jan 29 11:05:04.012199 sshd[1621]: Connection closed by 10.0.0.1 port 59626 Jan 29 11:05:04.012534 sshd-session[1619]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:04.015930 systemd[1]: sshd@6-10.0.0.81:22-10.0.0.1:59626.service: Deactivated successfully. Jan 29 11:05:04.017661 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:05:04.017879 systemd[1]: session-7.scope: Consumed 7.195s CPU time, 153.5M memory peak, 0B memory swap peak. Jan 29 11:05:04.018342 systemd-logind[1421]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:05:04.019555 systemd-logind[1421]: Removed session 7. Jan 29 11:05:04.815221 kubelet[2515]: E0129 11:05:04.815180 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:04.964014 kubelet[2515]: I0129 11:05:04.963981 2515 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:05:04.964804 containerd[1442]: time="2025-01-29T11:05:04.964717362Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:05:04.965104 kubelet[2515]: I0129 11:05:04.964911 2515 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:05:05.694763 systemd[1]: Created slice kubepods-besteffort-pod5260700c_b07c_4fc7_ace9_bd6494f7e371.slice - libcontainer container kubepods-besteffort-pod5260700c_b07c_4fc7_ace9_bd6494f7e371.slice. Jan 29 11:05:05.714056 systemd[1]: Created slice kubepods-burstable-podd7c5cf1b_71fe_424b_9a5a_e4fb37bd520d.slice - libcontainer container kubepods-burstable-podd7c5cf1b_71fe_424b_9a5a_e4fb37bd520d.slice. 
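The slice names systemd creates above follow the kubelet's systemd cgroup driver convention (the container-manager config earlier shows CgroupDriver "systemd"): kubepods-&lt;qos&gt;-pod&lt;uid&gt;.slice, with the pod UID's dashes mapped to underscores so the UID survives systemd unit-name escaping. A toy reconstruction of that naming, for illustration only, reproducing the two slices just logged.

```go
// Reconstruct the kubepods slice names seen in the log from QoS class
// and pod UID; this mirrors the naming convention, not kubelet's code.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("besteffort", "5260700c-b07c-4fc7-ace9-bd6494f7e371"))
	// kubepods-besteffort-pod5260700c_b07c_4fc7_ace9_bd6494f7e371.slice
	fmt.Println(podSlice("burstable", "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d"))
	// kubepods-burstable-podd7c5cf1b_71fe_424b_9a5a_e4fb37bd520d.slice
}
```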
Jan 29 11:05:05.747406 kubelet[2515]: I0129 11:05:05.747350 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-host-proc-sys-kernel\") pod \"cilium-jl55c\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") " pod="kube-system/cilium-jl55c" Jan 29 11:05:05.747406 kubelet[2515]: I0129 11:05:05.747397 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-cni-path\") pod \"cilium-jl55c\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") " pod="kube-system/cilium-jl55c" Jan 29 11:05:05.747406 kubelet[2515]: I0129 11:05:05.747418 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-etc-cni-netd\") pod \"cilium-jl55c\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") " pod="kube-system/cilium-jl55c" Jan 29 11:05:05.747603 kubelet[2515]: I0129 11:05:05.747437 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5260700c-b07c-4fc7-ace9-bd6494f7e371-kube-proxy\") pod \"kube-proxy-rwfvr\" (UID: \"5260700c-b07c-4fc7-ace9-bd6494f7e371\") " pod="kube-system/kube-proxy-rwfvr" Jan 29 11:05:05.747603 kubelet[2515]: I0129 11:05:05.747453 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5260700c-b07c-4fc7-ace9-bd6494f7e371-lib-modules\") pod \"kube-proxy-rwfvr\" (UID: \"5260700c-b07c-4fc7-ace9-bd6494f7e371\") " pod="kube-system/kube-proxy-rwfvr" Jan 29 11:05:05.747603 kubelet[2515]: I0129 11:05:05.747468 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-host-proc-sys-net\") pod \"cilium-jl55c\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") " pod="kube-system/cilium-jl55c" Jan 29 11:05:05.747603 kubelet[2515]: I0129 11:05:05.747482 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-xtables-lock\") pod \"cilium-jl55c\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") " pod="kube-system/cilium-jl55c" Jan 29 11:05:05.747603 kubelet[2515]: I0129 11:05:05.747506 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-bpf-maps\") pod \"cilium-jl55c\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") " pod="kube-system/cilium-jl55c" Jan 29 11:05:05.747603 kubelet[2515]: I0129 11:05:05.747523 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-hostproc\") pod \"cilium-jl55c\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") " pod="kube-system/cilium-jl55c" Jan 29 11:05:05.747758 kubelet[2515]: I0129 11:05:05.747539 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-cilium-cgroup\") pod \"cilium-jl55c\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") " pod="kube-system/cilium-jl55c" Jan 29 11:05:05.747758 kubelet[2515]: I0129 11:05:05.747571 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g5nh\" (UniqueName: \"kubernetes.io/projected/5260700c-b07c-4fc7-ace9-bd6494f7e371-kube-api-access-7g5nh\") pod \"kube-proxy-rwfvr\" (UID: \"5260700c-b07c-4fc7-ace9-bd6494f7e371\") " pod="kube-system/kube-proxy-rwfvr" Jan 29 11:05:05.747758 kubelet[2515]: I0129 11:05:05.747592 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-cilium-run\") pod \"cilium-jl55c\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") " pod="kube-system/cilium-jl55c" Jan 29 11:05:05.747758 kubelet[2515]: I0129 11:05:05.747609 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-clustermesh-secrets\") pod \"cilium-jl55c\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") " pod="kube-system/cilium-jl55c" Jan 29 11:05:05.747758 kubelet[2515]: I0129 11:05:05.747625 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ncgj\" (UniqueName: \"kubernetes.io/projected/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-kube-api-access-6ncgj\") pod \"cilium-jl55c\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") " pod="kube-system/cilium-jl55c" Jan 29 11:05:05.747857 kubelet[2515]: I0129 11:05:05.747640 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-cilium-config-path\") pod \"cilium-jl55c\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") " pod="kube-system/cilium-jl55c" Jan 29 11:05:05.747857 kubelet[2515]: I0129 11:05:05.747656 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-hubble-tls\") pod \"cilium-jl55c\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") " pod="kube-system/cilium-jl55c" Jan 29 11:05:05.747857 kubelet[2515]: I0129 11:05:05.747674 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5260700c-b07c-4fc7-ace9-bd6494f7e371-xtables-lock\") pod \"kube-proxy-rwfvr\" (UID: \"5260700c-b07c-4fc7-ace9-bd6494f7e371\") " pod="kube-system/kube-proxy-rwfvr" Jan 29 11:05:05.747857 kubelet[2515]: I0129 11:05:05.747709 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-lib-modules\") pod \"cilium-jl55c\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") " pod="kube-system/cilium-jl55c" Jan 29 11:05:06.009630 kubelet[2515]: E0129 11:05:06.009503 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:06.010787 containerd[1442]: time="2025-01-29T11:05:06.010579477Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rwfvr,Uid:5260700c-b07c-4fc7-ace9-bd6494f7e371,Namespace:kube-system,Attempt:0,}" Jan 29 11:05:06.018978 kubelet[2515]: E0129 11:05:06.018926 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:06.019490 containerd[1442]: time="2025-01-29T11:05:06.019441968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jl55c,Uid:d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d,Namespace:kube-system,Attempt:0,}" Jan 29 11:05:06.110753 containerd[1442]: time="2025-01-29T11:05:06.110256518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:05:06.110753 containerd[1442]: time="2025-01-29T11:05:06.110312408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:05:06.110753 containerd[1442]: time="2025-01-29T11:05:06.110328931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:06.110753 containerd[1442]: time="2025-01-29T11:05:06.110401065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:06.113220 containerd[1442]: time="2025-01-29T11:05:06.113134882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:05:06.113220 containerd[1442]: time="2025-01-29T11:05:06.113201974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:05:06.113320 containerd[1442]: time="2025-01-29T11:05:06.113224098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:06.113320 containerd[1442]: time="2025-01-29T11:05:06.113293990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:06.115578 systemd[1]: Created slice kubepods-besteffort-pod4c41a563_e669_47f2_849f_60a54a4d2387.slice - libcontainer container kubepods-besteffort-pod4c41a563_e669_47f2_849f_60a54a4d2387.slice. Jan 29 11:05:06.135885 systemd[1]: Started cri-containerd-221653bff4e78232c93a4b6e3f6afe633de5d6e529b6134dedac02f8486a94ff.scope - libcontainer container 221653bff4e78232c93a4b6e3f6afe633de5d6e529b6134dedac02f8486a94ff. Jan 29 11:05:06.138235 systemd[1]: Started cri-containerd-f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0.scope - libcontainer container f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0. 
Jan 29 11:05:06.150539 kubelet[2515]: I0129 11:05:06.150504 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c41a563-e669-47f2-849f-60a54a4d2387-cilium-config-path\") pod \"cilium-operator-5d85765b45-4n6hd\" (UID: \"4c41a563-e669-47f2-849f-60a54a4d2387\") " pod="kube-system/cilium-operator-5d85765b45-4n6hd" Jan 29 11:05:06.150718 kubelet[2515]: I0129 11:05:06.150542 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mszpr\" (UniqueName: \"kubernetes.io/projected/4c41a563-e669-47f2-849f-60a54a4d2387-kube-api-access-mszpr\") pod \"cilium-operator-5d85765b45-4n6hd\" (UID: \"4c41a563-e669-47f2-849f-60a54a4d2387\") " pod="kube-system/cilium-operator-5d85765b45-4n6hd" Jan 29 11:05:06.160098 containerd[1442]: time="2025-01-29T11:05:06.160060413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rwfvr,Uid:5260700c-b07c-4fc7-ace9-bd6494f7e371,Namespace:kube-system,Attempt:0,} returns sandbox id \"221653bff4e78232c93a4b6e3f6afe633de5d6e529b6134dedac02f8486a94ff\"" Jan 29 11:05:06.160755 kubelet[2515]: E0129 11:05:06.160732 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:06.164019 containerd[1442]: time="2025-01-29T11:05:06.163984286Z" level=info msg="CreateContainer within sandbox \"221653bff4e78232c93a4b6e3f6afe633de5d6e529b6134dedac02f8486a94ff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:05:06.166912 containerd[1442]: time="2025-01-29T11:05:06.166839085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jl55c,Uid:d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0\"" Jan 29 11:05:06.167538 kubelet[2515]: E0129 11:05:06.167517 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:06.168627 containerd[1442]: time="2025-01-29T11:05:06.168590844Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 11:05:06.192039 containerd[1442]: time="2025-01-29T11:05:06.191955732Z" level=info msg="CreateContainer within sandbox \"221653bff4e78232c93a4b6e3f6afe633de5d6e529b6134dedac02f8486a94ff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e956065aaa0a04013efef3511bef6ff34c1eb515588504970a36dcccf7a8c0d5\"" Jan 29 11:05:06.193296 containerd[1442]: time="2025-01-29T11:05:06.193259089Z" level=info msg="StartContainer for \"e956065aaa0a04013efef3511bef6ff34c1eb515588504970a36dcccf7a8c0d5\"" Jan 29 11:05:06.218870 systemd[1]: Started cri-containerd-e956065aaa0a04013efef3511bef6ff34c1eb515588504970a36dcccf7a8c0d5.scope - libcontainer container e956065aaa0a04013efef3511bef6ff34c1eb515588504970a36dcccf7a8c0d5. 
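The cilium pull just requested uses a tag pinned to a digest (quay.io/cilium/cilium:v1.12.5@sha256:06ce2b...), so the fetched content is verified against the digest regardless of where the tag points later. A minimal sketch of the same pull through containerd's Go client, assuming access to the node's containerd socket and the k8s.io namespace that CRI images live in.

```go
// Pull a digest-pinned image reference with the containerd client.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images are stored under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"

	// WithPullUnpack also unpacks the layers into a snapshot, like the
	// CRI pull path does before a container can start.
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```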
Jan 29 11:05:06.242247 containerd[1442]: time="2025-01-29T11:05:06.242207428Z" level=info msg="StartContainer for \"e956065aaa0a04013efef3511bef6ff34c1eb515588504970a36dcccf7a8c0d5\" returns successfully" Jan 29 11:05:06.419483 kubelet[2515]: E0129 11:05:06.419443 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:06.420480 containerd[1442]: time="2025-01-29T11:05:06.420435390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4n6hd,Uid:4c41a563-e669-47f2-849f-60a54a4d2387,Namespace:kube-system,Attempt:0,}" Jan 29 11:05:06.449244 containerd[1442]: time="2025-01-29T11:05:06.448531218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:05:06.449244 containerd[1442]: time="2025-01-29T11:05:06.448592989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:05:06.449822 containerd[1442]: time="2025-01-29T11:05:06.449756721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:06.449897 containerd[1442]: time="2025-01-29T11:05:06.449866621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:06.469861 systemd[1]: Started cri-containerd-3b30f2fd90c9521c36e1f4a3b8c9c531a0a5785cc7cd82124ab3ada9aa3832d0.scope - libcontainer container 3b30f2fd90c9521c36e1f4a3b8c9c531a0a5785cc7cd82124ab3ada9aa3832d0. Jan 29 11:05:06.500597 containerd[1442]: time="2025-01-29T11:05:06.497577495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4n6hd,Uid:4c41a563-e669-47f2-849f-60a54a4d2387,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b30f2fd90c9521c36e1f4a3b8c9c531a0a5785cc7cd82124ab3ada9aa3832d0\"" Jan 29 11:05:06.502650 kubelet[2515]: E0129 11:05:06.498435 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:06.846435 kubelet[2515]: E0129 11:05:06.846382 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:08.249102 kubelet[2515]: E0129 11:05:08.246006 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:08.265778 kubelet[2515]: I0129 11:05:08.265623 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rwfvr" podStartSLOduration=3.265604451 podStartE2EDuration="3.265604451s" podCreationTimestamp="2025-01-29 11:05:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:05:06.856635933 +0000 UTC m=+6.108171215" watchObservedRunningTime="2025-01-29 11:05:08.265604451 +0000 UTC m=+7.517139733" Jan 29 11:05:08.626455 kubelet[2515]: E0129 11:05:08.626368 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:08.851358 kubelet[2515]: E0129 11:05:08.851311 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:08.851485 kubelet[2515]: E0129 11:05:08.851448 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:11.841475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount62259207.mount: Deactivated successfully. Jan 29 11:05:13.101176 update_engine[1425]: I20250129 11:05:13.100720 1425 update_attempter.cc:509] Updating boot flags... Jan 29 11:05:13.215753 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2914) Jan 29 11:05:13.283423 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2914) Jan 29 11:05:13.306865 containerd[1442]: time="2025-01-29T11:05:13.306801494Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:13.307450 containerd[1442]: time="2025-01-29T11:05:13.307396489Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 29 11:05:13.308883 containerd[1442]: time="2025-01-29T11:05:13.308840911Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:13.309708 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2914) Jan 29 11:05:13.310641 containerd[1442]: time="2025-01-29T11:05:13.310607294Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.141976564s" Jan 29 11:05:13.311874 containerd[1442]: time="2025-01-29T11:05:13.311794803Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 29 11:05:13.324183 containerd[1442]: time="2025-01-29T11:05:13.324130758Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:05:13.327192 containerd[1442]: time="2025-01-29T11:05:13.327134096Z" level=info msg="CreateContainer within sandbox \"f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:05:13.355754 containerd[1442]: time="2025-01-29T11:05:13.355650369Z" level=info msg="CreateContainer within sandbox \"f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba\"" Jan 29 11:05:13.356063 containerd[1442]: 
time="2025-01-29T11:05:13.356039098Z" level=info msg="StartContainer for \"57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba\"" Jan 29 11:05:13.389855 systemd[1]: Started cri-containerd-57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba.scope - libcontainer container 57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba. Jan 29 11:05:13.434957 containerd[1442]: time="2025-01-29T11:05:13.434883313Z" level=info msg="StartContainer for \"57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba\" returns successfully" Jan 29 11:05:13.473462 systemd[1]: cri-containerd-57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba.scope: Deactivated successfully. Jan 29 11:05:13.497357 containerd[1442]: time="2025-01-29T11:05:13.492656272Z" level=info msg="shim disconnected" id=57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba namespace=k8s.io Jan 29 11:05:13.497357 containerd[1442]: time="2025-01-29T11:05:13.497347943Z" level=warning msg="cleaning up after shim disconnected" id=57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba namespace=k8s.io Jan 29 11:05:13.497357 containerd[1442]: time="2025-01-29T11:05:13.497359785Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:05:13.866874 kubelet[2515]: E0129 11:05:13.866843 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:13.870982 containerd[1442]: time="2025-01-29T11:05:13.869119587Z" level=info msg="CreateContainer within sandbox \"f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:05:13.881039 containerd[1442]: time="2025-01-29T11:05:13.880944477Z" level=info msg="CreateContainer within sandbox \"f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5\"" Jan 29 11:05:13.881552 containerd[1442]: time="2025-01-29T11:05:13.881465142Z" level=info msg="StartContainer for \"067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5\"" Jan 29 11:05:13.908857 systemd[1]: Started cri-containerd-067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5.scope - libcontainer container 067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5. Jan 29 11:05:13.932018 containerd[1442]: time="2025-01-29T11:05:13.931969466Z" level=info msg="StartContainer for \"067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5\" returns successfully" Jan 29 11:05:13.963797 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:05:13.964024 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:05:13.964094 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:05:13.972747 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:05:13.972983 systemd[1]: cri-containerd-067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5.scope: Deactivated successfully. Jan 29 11:05:13.986283 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 11:05:13.991326 containerd[1442]: time="2025-01-29T11:05:13.991210931Z" level=info msg="shim disconnected" id=067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5 namespace=k8s.io Jan 29 11:05:13.991326 containerd[1442]: time="2025-01-29T11:05:13.991264857Z" level=warning msg="cleaning up after shim disconnected" id=067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5 namespace=k8s.io Jan 29 11:05:13.991326 containerd[1442]: time="2025-01-29T11:05:13.991272938Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:05:14.353657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba-rootfs.mount: Deactivated successfully. Jan 29 11:05:14.632234 containerd[1442]: time="2025-01-29T11:05:14.632112028Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:14.632797 containerd[1442]: time="2025-01-29T11:05:14.632750745Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 11:05:14.633505 containerd[1442]: time="2025-01-29T11:05:14.633464311Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:14.635029 containerd[1442]: time="2025-01-29T11:05:14.634994734Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.31081433s" Jan 29 11:05:14.635071 containerd[1442]: time="2025-01-29T11:05:14.635032739Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 11:05:14.636986 containerd[1442]: time="2025-01-29T11:05:14.636947528Z" level=info msg="CreateContainer within sandbox \"3b30f2fd90c9521c36e1f4a3b8c9c531a0a5785cc7cd82124ab3ada9aa3832d0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:05:14.646379 containerd[1442]: time="2025-01-29T11:05:14.646296569Z" level=info msg="CreateContainer within sandbox \"3b30f2fd90c9521c36e1f4a3b8c9c531a0a5785cc7cd82124ab3ada9aa3832d0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af\"" Jan 29 11:05:14.646774 containerd[1442]: time="2025-01-29T11:05:14.646738542Z" level=info msg="StartContainer for \"649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af\"" Jan 29 11:05:14.672846 systemd[1]: Started cri-containerd-649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af.scope - libcontainer container 649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af. 
Jan 29 11:05:14.695694 containerd[1442]: time="2025-01-29T11:05:14.695617402Z" level=info msg="StartContainer for \"649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af\" returns successfully" Jan 29 11:05:14.836940 kubelet[2515]: E0129 11:05:14.836891 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:14.866008 kubelet[2515]: E0129 11:05:14.864677 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:14.871888 kubelet[2515]: E0129 11:05:14.871701 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:14.876592 containerd[1442]: time="2025-01-29T11:05:14.876535453Z" level=info msg="CreateContainer within sandbox \"f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:05:14.898019 kubelet[2515]: I0129 11:05:14.897325 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-4n6hd" podStartSLOduration=0.760867529 podStartE2EDuration="8.897308264s" podCreationTimestamp="2025-01-29 11:05:06 +0000 UTC" firstStartedPulling="2025-01-29 11:05:06.499303529 +0000 UTC m=+5.750838811" lastFinishedPulling="2025-01-29 11:05:14.635744304 +0000 UTC m=+13.887279546" observedRunningTime="2025-01-29 11:05:14.897143124 +0000 UTC m=+14.148678406" watchObservedRunningTime="2025-01-29 11:05:14.897308264 +0000 UTC m=+14.148843546" Jan 29 11:05:14.910892 containerd[1442]: time="2025-01-29T11:05:14.910839726Z" level=info msg="CreateContainer within sandbox \"f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d\"" Jan 29 11:05:14.911493 containerd[1442]: time="2025-01-29T11:05:14.911465961Z" level=info msg="StartContainer for \"6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d\"" Jan 29 11:05:14.941893 systemd[1]: Started cri-containerd-6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d.scope - libcontainer container 6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d. Jan 29 11:05:14.967633 containerd[1442]: time="2025-01-29T11:05:14.966646697Z" level=info msg="StartContainer for \"6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d\" returns successfully" Jan 29 11:05:14.985331 systemd[1]: cri-containerd-6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d.scope: Deactivated successfully. 
Jan 29 11:05:15.006575 containerd[1442]: time="2025-01-29T11:05:15.006503882Z" level=info msg="shim disconnected" id=6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d namespace=k8s.io Jan 29 11:05:15.006575 containerd[1442]: time="2025-01-29T11:05:15.006562409Z" level=warning msg="cleaning up after shim disconnected" id=6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d namespace=k8s.io Jan 29 11:05:15.006575 containerd[1442]: time="2025-01-29T11:05:15.006573330Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:05:15.875036 kubelet[2515]: E0129 11:05:15.874891 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:15.875484 kubelet[2515]: E0129 11:05:15.875212 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:15.878101 containerd[1442]: time="2025-01-29T11:05:15.878053745Z" level=info msg="CreateContainer within sandbox \"f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:05:15.890647 containerd[1442]: time="2025-01-29T11:05:15.890603098Z" level=info msg="CreateContainer within sandbox \"f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2\"" Jan 29 11:05:15.891659 containerd[1442]: time="2025-01-29T11:05:15.891616894Z" level=info msg="StartContainer for \"f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2\"" Jan 29 11:05:15.924904 systemd[1]: Started cri-containerd-f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2.scope - libcontainer container f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2. Jan 29 11:05:15.945759 systemd[1]: cri-containerd-f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2.scope: Deactivated successfully. Jan 29 11:05:15.947920 containerd[1442]: time="2025-01-29T11:05:15.947872596Z" level=info msg="StartContainer for \"f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2\" returns successfully" Jan 29 11:05:15.968883 containerd[1442]: time="2025-01-29T11:05:15.968765142Z" level=info msg="shim disconnected" id=f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2 namespace=k8s.io Jan 29 11:05:15.968883 containerd[1442]: time="2025-01-29T11:05:15.968844831Z" level=warning msg="cleaning up after shim disconnected" id=f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2 namespace=k8s.io Jan 29 11:05:15.968883 containerd[1442]: time="2025-01-29T11:05:15.968852992Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:05:16.353261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2-rootfs.mount: Deactivated successfully. 
Jan 29 11:05:16.878520 kubelet[2515]: E0129 11:05:16.878491 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:16.881696 containerd[1442]: time="2025-01-29T11:05:16.881654283Z" level=info msg="CreateContainer within sandbox \"f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:05:16.903923 containerd[1442]: time="2025-01-29T11:05:16.903798732Z" level=info msg="CreateContainer within sandbox \"f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c\"" Jan 29 11:05:16.905025 containerd[1442]: time="2025-01-29T11:05:16.904996622Z" level=info msg="StartContainer for \"0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c\"" Jan 29 11:05:16.934879 systemd[1]: Started cri-containerd-0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c.scope - libcontainer container 0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c. Jan 29 11:05:16.964296 containerd[1442]: time="2025-01-29T11:05:16.964239428Z" level=info msg="StartContainer for \"0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c\" returns successfully" Jan 29 11:05:17.147692 kubelet[2515]: I0129 11:05:17.147520 2515 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 11:05:17.239056 systemd[1]: Created slice kubepods-burstable-pod2512f038_4567_41c8_9913_6b43458c0573.slice - libcontainer container kubepods-burstable-pod2512f038_4567_41c8_9913_6b43458c0573.slice. Jan 29 11:05:17.244191 systemd[1]: Created slice kubepods-burstable-pod75354174_c593_45c6_8c81_377670302844.slice - libcontainer container kubepods-burstable-pod75354174_c593_45c6_8c81_377670302844.slice. 
Jan 29 11:05:17.330640 kubelet[2515]: I0129 11:05:17.330556 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l586b\" (UniqueName: \"kubernetes.io/projected/75354174-c593-45c6-8c81-377670302844-kube-api-access-l586b\") pod \"coredns-6f6b679f8f-wrrrr\" (UID: \"75354174-c593-45c6-8c81-377670302844\") " pod="kube-system/coredns-6f6b679f8f-wrrrr" Jan 29 11:05:17.330640 kubelet[2515]: I0129 11:05:17.330601 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2512f038-4567-41c8-9913-6b43458c0573-config-volume\") pod \"coredns-6f6b679f8f-fz6dj\" (UID: \"2512f038-4567-41c8-9913-6b43458c0573\") " pod="kube-system/coredns-6f6b679f8f-fz6dj" Jan 29 11:05:17.330813 kubelet[2515]: I0129 11:05:17.330659 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxwqn\" (UniqueName: \"kubernetes.io/projected/2512f038-4567-41c8-9913-6b43458c0573-kube-api-access-cxwqn\") pod \"coredns-6f6b679f8f-fz6dj\" (UID: \"2512f038-4567-41c8-9913-6b43458c0573\") " pod="kube-system/coredns-6f6b679f8f-fz6dj" Jan 29 11:05:17.330813 kubelet[2515]: I0129 11:05:17.330713 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75354174-c593-45c6-8c81-377670302844-config-volume\") pod \"coredns-6f6b679f8f-wrrrr\" (UID: \"75354174-c593-45c6-8c81-377670302844\") " pod="kube-system/coredns-6f6b679f8f-wrrrr" Jan 29 11:05:17.542917 kubelet[2515]: E0129 11:05:17.542716 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:17.544901 containerd[1442]: time="2025-01-29T11:05:17.543427992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fz6dj,Uid:2512f038-4567-41c8-9913-6b43458c0573,Namespace:kube-system,Attempt:0,}" Jan 29 11:05:17.546362 kubelet[2515]: E0129 11:05:17.546327 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:17.547056 containerd[1442]: time="2025-01-29T11:05:17.547017604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wrrrr,Uid:75354174-c593-45c6-8c81-377670302844,Namespace:kube-system,Attempt:0,}" Jan 29 11:05:17.882435 kubelet[2515]: E0129 11:05:17.882405 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:17.907510 kubelet[2515]: I0129 11:05:17.907123 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jl55c" podStartSLOduration=5.751372727 podStartE2EDuration="12.90710605s" podCreationTimestamp="2025-01-29 11:05:05 +0000 UTC" firstStartedPulling="2025-01-29 11:05:06.168146763 +0000 UTC m=+5.419682045" lastFinishedPulling="2025-01-29 11:05:13.323880086 +0000 UTC m=+12.575415368" observedRunningTime="2025-01-29 11:05:17.906754213 +0000 UTC m=+17.158289535" watchObservedRunningTime="2025-01-29 11:05:17.90710605 +0000 UTC m=+17.158641292" Jan 29 11:05:18.884567 kubelet[2515]: E0129 11:05:18.884525 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:19.189779 systemd-networkd[1372]: cilium_host: Link UP Jan 29 11:05:19.196161 systemd-networkd[1372]: cilium_net: Link UP Jan 29 11:05:19.196768 systemd-networkd[1372]: cilium_net: Gained carrier Jan 29 11:05:19.197247 systemd-networkd[1372]: cilium_host: Gained carrier Jan 29 11:05:19.197673 systemd-networkd[1372]: cilium_net: Gained IPv6LL Jan 29 11:05:19.198210 systemd-networkd[1372]: cilium_host: Gained IPv6LL Jan 29 11:05:19.279586 systemd-networkd[1372]: cilium_vxlan: Link UP Jan 29 11:05:19.279608 systemd-networkd[1372]: cilium_vxlan: Gained carrier Jan 29 11:05:19.599775 kernel: NET: Registered PF_ALG protocol family Jan 29 11:05:19.886501 kubelet[2515]: E0129 11:05:19.886231 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:20.215146 systemd-networkd[1372]: lxc_health: Link UP Jan 29 11:05:20.223844 systemd-networkd[1372]: lxc_health: Gained carrier Jan 29 11:05:20.413886 systemd-networkd[1372]: cilium_vxlan: Gained IPv6LL Jan 29 11:05:20.653276 systemd-networkd[1372]: lxc2e383c15a79d: Link UP Jan 29 11:05:20.653879 systemd-networkd[1372]: lxc57b062524f23: Link UP Jan 29 11:05:20.669717 kernel: eth0: renamed from tmpf0dc2 Jan 29 11:05:20.683850 kernel: eth0: renamed from tmpef573 Jan 29 11:05:20.694451 systemd-networkd[1372]: lxc2e383c15a79d: Gained carrier Jan 29 11:05:20.694992 systemd-networkd[1372]: lxc57b062524f23: Gained carrier Jan 29 11:05:21.757891 systemd-networkd[1372]: lxc_health: Gained IPv6LL Jan 29 11:05:22.023035 kubelet[2515]: E0129 11:05:22.022728 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:22.461895 systemd-networkd[1372]: lxc57b062524f23: Gained IPv6LL Jan 29 11:05:22.718850 systemd-networkd[1372]: lxc2e383c15a79d: Gained IPv6LL Jan 29 11:05:22.891008 kubelet[2515]: E0129 11:05:22.890968 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:23.892693 kubelet[2515]: E0129 11:05:23.892644 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:24.252862 containerd[1442]: time="2025-01-29T11:05:24.252480936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:05:24.252862 containerd[1442]: time="2025-01-29T11:05:24.252535340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:05:24.252862 containerd[1442]: time="2025-01-29T11:05:24.252557462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:24.252862 containerd[1442]: time="2025-01-29T11:05:24.252631228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:24.253547 containerd[1442]: time="2025-01-29T11:05:24.252985535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:05:24.253547 containerd[1442]: time="2025-01-29T11:05:24.253477452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:05:24.253547 containerd[1442]: time="2025-01-29T11:05:24.253490613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:24.253790 containerd[1442]: time="2025-01-29T11:05:24.253570299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:24.268182 systemd[1]: run-containerd-runc-k8s.io-f0dc28f2f2b927ea129d42e205d94a01626d433481a65b13ebff7f7ef6bec253-runc.Ub0oTy.mount: Deactivated successfully. Jan 29 11:05:24.285903 systemd[1]: Started cri-containerd-ef5737cf37b2a3bf94204b53fab40c7bc4d9a42563e5b0fc4d4a62cdcc18c868.scope - libcontainer container ef5737cf37b2a3bf94204b53fab40c7bc4d9a42563e5b0fc4d4a62cdcc18c868. Jan 29 11:05:24.287111 systemd[1]: Started cri-containerd-f0dc28f2f2b927ea129d42e205d94a01626d433481a65b13ebff7f7ef6bec253.scope - libcontainer container f0dc28f2f2b927ea129d42e205d94a01626d433481a65b13ebff7f7ef6bec253. Jan 29 11:05:24.299783 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:05:24.300570 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:05:24.318598 containerd[1442]: time="2025-01-29T11:05:24.318530898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wrrrr,Uid:75354174-c593-45c6-8c81-377670302844,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0dc28f2f2b927ea129d42e205d94a01626d433481a65b13ebff7f7ef6bec253\"" Jan 29 11:05:24.322812 kubelet[2515]: E0129 11:05:24.322783 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:24.324234 containerd[1442]: time="2025-01-29T11:05:24.324199450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fz6dj,Uid:2512f038-4567-41c8-9913-6b43458c0573,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef5737cf37b2a3bf94204b53fab40c7bc4d9a42563e5b0fc4d4a62cdcc18c868\"" Jan 29 11:05:24.325455 kubelet[2515]: E0129 11:05:24.325434 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:24.326108 containerd[1442]: time="2025-01-29T11:05:24.326057152Z" level=info msg="CreateContainer within sandbox \"f0dc28f2f2b927ea129d42e205d94a01626d433481a65b13ebff7f7ef6bec253\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:05:24.327905 containerd[1442]: time="2025-01-29T11:05:24.327872411Z" level=info msg="CreateContainer within sandbox \"ef5737cf37b2a3bf94204b53fab40c7bc4d9a42563e5b0fc4d4a62cdcc18c868\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:05:24.359705 containerd[1442]: time="2025-01-29T11:05:24.359629274Z" level=info msg="CreateContainer within sandbox \"f0dc28f2f2b927ea129d42e205d94a01626d433481a65b13ebff7f7ef6bec253\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"648e462e1ec2b08ebb26d25ccb0075480197c4096324ac9b2da95d5e84a9155e\"" Jan 
29 11:05:24.360323 containerd[1442]: time="2025-01-29T11:05:24.360284244Z" level=info msg="StartContainer for \"648e462e1ec2b08ebb26d25ccb0075480197c4096324ac9b2da95d5e84a9155e\"" Jan 29 11:05:24.363115 containerd[1442]: time="2025-01-29T11:05:24.362723111Z" level=info msg="CreateContainer within sandbox \"ef5737cf37b2a3bf94204b53fab40c7bc4d9a42563e5b0fc4d4a62cdcc18c868\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e2d5c51dca343ffb1fa9549d80a9eb2d074a0ebdb191e17eaf26f5c058ce3af6\"" Jan 29 11:05:24.363702 containerd[1442]: time="2025-01-29T11:05:24.363671223Z" level=info msg="StartContainer for \"e2d5c51dca343ffb1fa9549d80a9eb2d074a0ebdb191e17eaf26f5c058ce3af6\"" Jan 29 11:05:24.384873 systemd[1]: Started cri-containerd-648e462e1ec2b08ebb26d25ccb0075480197c4096324ac9b2da95d5e84a9155e.scope - libcontainer container 648e462e1ec2b08ebb26d25ccb0075480197c4096324ac9b2da95d5e84a9155e. Jan 29 11:05:24.387235 systemd[1]: Started cri-containerd-e2d5c51dca343ffb1fa9549d80a9eb2d074a0ebdb191e17eaf26f5c058ce3af6.scope - libcontainer container e2d5c51dca343ffb1fa9549d80a9eb2d074a0ebdb191e17eaf26f5c058ce3af6. Jan 29 11:05:24.432508 containerd[1442]: time="2025-01-29T11:05:24.430765904Z" level=info msg="StartContainer for \"648e462e1ec2b08ebb26d25ccb0075480197c4096324ac9b2da95d5e84a9155e\" returns successfully" Jan 29 11:05:24.432508 containerd[1442]: time="2025-01-29T11:05:24.430866632Z" level=info msg="StartContainer for \"e2d5c51dca343ffb1fa9549d80a9eb2d074a0ebdb191e17eaf26f5c058ce3af6\" returns successfully" Jan 29 11:05:24.896712 kubelet[2515]: E0129 11:05:24.896598 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:24.898635 kubelet[2515]: E0129 11:05:24.898505 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:24.931016 kubelet[2515]: I0129 11:05:24.930558 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wrrrr" podStartSLOduration=18.93053805 podStartE2EDuration="18.93053805s" podCreationTimestamp="2025-01-29 11:05:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:05:24.918922963 +0000 UTC m=+24.170458245" watchObservedRunningTime="2025-01-29 11:05:24.93053805 +0000 UTC m=+24.182073332" Jan 29 11:05:24.941428 kubelet[2515]: I0129 11:05:24.941364 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-fz6dj" podStartSLOduration=18.941346435 podStartE2EDuration="18.941346435s" podCreationTimestamp="2025-01-29 11:05:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:05:24.930780868 +0000 UTC m=+24.182316110" watchObservedRunningTime="2025-01-29 11:05:24.941346435 +0000 UTC m=+24.192881717" Jan 29 11:05:25.899985 kubelet[2515]: E0129 11:05:25.899956 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:25.900318 kubelet[2515]: E0129 11:05:25.900023 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:26.902124 kubelet[2515]: E0129 11:05:26.902086 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:05:27.281729 systemd[1]: Started sshd@7-10.0.0.81:22-10.0.0.1:47580.service - OpenSSH per-connection server daemon (10.0.0.1:47580). Jan 29 11:05:27.335465 sshd[3922]: Accepted publickey for core from 10.0.0.1 port 47580 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:05:27.337013 sshd-session[3922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:27.341532 systemd-logind[1421]: New session 8 of user core. Jan 29 11:05:27.358908 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:05:27.486483 sshd[3924]: Connection closed by 10.0.0.1 port 47580 Jan 29 11:05:27.486839 sshd-session[3922]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:27.489952 systemd[1]: sshd@7-10.0.0.81:22-10.0.0.1:47580.service: Deactivated successfully. Jan 29 11:05:27.491512 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:05:27.492105 systemd-logind[1421]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:05:27.493150 systemd-logind[1421]: Removed session 8. Jan 29 11:05:32.501608 systemd[1]: Started sshd@8-10.0.0.81:22-10.0.0.1:56472.service - OpenSSH per-connection server daemon (10.0.0.1:56472). Jan 29 11:05:32.546729 sshd[3939]: Accepted publickey for core from 10.0.0.1 port 56472 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:05:32.548001 sshd-session[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:32.553798 systemd-logind[1421]: New session 9 of user core. Jan 29 11:05:32.563843 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:05:32.689551 sshd[3941]: Connection closed by 10.0.0.1 port 56472 Jan 29 11:05:32.689932 sshd-session[3939]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:32.693466 systemd[1]: sshd@8-10.0.0.81:22-10.0.0.1:56472.service: Deactivated successfully. Jan 29 11:05:32.695308 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:05:32.699197 systemd-logind[1421]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:05:32.701026 systemd-logind[1421]: Removed session 9. Jan 29 11:05:37.701642 systemd[1]: Started sshd@9-10.0.0.81:22-10.0.0.1:56474.service - OpenSSH per-connection server daemon (10.0.0.1:56474). Jan 29 11:05:37.752513 sshd[3959]: Accepted publickey for core from 10.0.0.1 port 56474 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:05:37.753869 sshd-session[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:37.758300 systemd-logind[1421]: New session 10 of user core. Jan 29 11:05:37.768200 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:05:37.902245 sshd[3961]: Connection closed by 10.0.0.1 port 56474 Jan 29 11:05:37.902995 sshd-session[3959]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:37.906577 systemd[1]: sshd@9-10.0.0.81:22-10.0.0.1:56474.service: Deactivated successfully. Jan 29 11:05:37.909543 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:05:37.913624 systemd-logind[1421]: Session 10 logged out. Waiting for processes to exit. 
Jan 29 11:05:37.914767 systemd-logind[1421]: Removed session 10. Jan 29 11:05:42.915550 systemd[1]: Started sshd@10-10.0.0.81:22-10.0.0.1:50906.service - OpenSSH per-connection server daemon (10.0.0.1:50906). Jan 29 11:05:42.961617 sshd[3974]: Accepted publickey for core from 10.0.0.1 port 50906 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:05:42.963885 sshd-session[3974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:42.969112 systemd-logind[1421]: New session 11 of user core. Jan 29 11:05:42.973967 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:05:43.093392 sshd[3976]: Connection closed by 10.0.0.1 port 50906 Jan 29 11:05:43.093783 sshd-session[3974]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:43.104354 systemd[1]: sshd@10-10.0.0.81:22-10.0.0.1:50906.service: Deactivated successfully. Jan 29 11:05:43.106060 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:05:43.107872 systemd-logind[1421]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:05:43.109127 systemd[1]: Started sshd@11-10.0.0.81:22-10.0.0.1:50912.service - OpenSSH per-connection server daemon (10.0.0.1:50912). Jan 29 11:05:43.110867 systemd-logind[1421]: Removed session 11. Jan 29 11:05:43.171020 sshd[3990]: Accepted publickey for core from 10.0.0.1 port 50912 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:05:43.171589 sshd-session[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:43.175819 systemd-logind[1421]: New session 12 of user core. Jan 29 11:05:43.185872 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:05:43.342831 sshd[3992]: Connection closed by 10.0.0.1 port 50912 Jan 29 11:05:43.342800 sshd-session[3990]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:43.355474 systemd[1]: sshd@11-10.0.0.81:22-10.0.0.1:50912.service: Deactivated successfully. Jan 29 11:05:43.361061 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:05:43.362928 systemd-logind[1421]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:05:43.372251 systemd[1]: Started sshd@12-10.0.0.81:22-10.0.0.1:50928.service - OpenSSH per-connection server daemon (10.0.0.1:50928). Jan 29 11:05:43.373721 systemd-logind[1421]: Removed session 12. Jan 29 11:05:43.417433 sshd[4002]: Accepted publickey for core from 10.0.0.1 port 50928 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:05:43.418878 sshd-session[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:43.424746 systemd-logind[1421]: New session 13 of user core. Jan 29 11:05:43.434895 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:05:43.559803 sshd[4004]: Connection closed by 10.0.0.1 port 50928 Jan 29 11:05:43.560170 sshd-session[4002]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:43.563470 systemd[1]: sshd@12-10.0.0.81:22-10.0.0.1:50928.service: Deactivated successfully. Jan 29 11:05:43.565474 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:05:43.566295 systemd-logind[1421]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:05:43.567205 systemd-logind[1421]: Removed session 13. Jan 29 11:05:48.571341 systemd[1]: Started sshd@13-10.0.0.81:22-10.0.0.1:50932.service - OpenSSH per-connection server daemon (10.0.0.1:50932). 
Jan 29 11:05:48.615056 sshd[4017]: Accepted publickey for core from 10.0.0.1 port 50932 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:05:48.616373 sshd-session[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:48.620217 systemd-logind[1421]: New session 14 of user core. Jan 29 11:05:48.626966 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:05:48.734829 sshd[4019]: Connection closed by 10.0.0.1 port 50932 Jan 29 11:05:48.735170 sshd-session[4017]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:48.738271 systemd[1]: sshd@13-10.0.0.81:22-10.0.0.1:50932.service: Deactivated successfully. Jan 29 11:05:48.741456 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:05:48.742133 systemd-logind[1421]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:05:48.743015 systemd-logind[1421]: Removed session 14. Jan 29 11:05:53.746223 systemd[1]: Started sshd@14-10.0.0.81:22-10.0.0.1:44016.service - OpenSSH per-connection server daemon (10.0.0.1:44016). Jan 29 11:05:53.790049 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 44016 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:05:53.791232 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:53.794695 systemd-logind[1421]: New session 15 of user core. Jan 29 11:05:53.806841 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:05:53.913194 sshd[4033]: Connection closed by 10.0.0.1 port 44016 Jan 29 11:05:53.913744 sshd-session[4031]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:53.924065 systemd[1]: sshd@14-10.0.0.81:22-10.0.0.1:44016.service: Deactivated successfully. Jan 29 11:05:53.925442 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:05:53.926606 systemd-logind[1421]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:05:53.927917 systemd[1]: Started sshd@15-10.0.0.81:22-10.0.0.1:44028.service - OpenSSH per-connection server daemon (10.0.0.1:44028). Jan 29 11:05:53.928555 systemd-logind[1421]: Removed session 15. Jan 29 11:05:53.970342 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 44028 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:05:53.971459 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:53.974950 systemd-logind[1421]: New session 16 of user core. Jan 29 11:05:53.983827 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:05:54.179837 sshd[4047]: Connection closed by 10.0.0.1 port 44028 Jan 29 11:05:54.180641 sshd-session[4045]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:54.187144 systemd[1]: sshd@15-10.0.0.81:22-10.0.0.1:44028.service: Deactivated successfully. Jan 29 11:05:54.188943 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:05:54.190183 systemd-logind[1421]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:05:54.205225 systemd[1]: Started sshd@16-10.0.0.81:22-10.0.0.1:44038.service - OpenSSH per-connection server daemon (10.0.0.1:44038). Jan 29 11:05:54.206194 systemd-logind[1421]: Removed session 16. 
Jan 29 11:05:54.248374 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 44038 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:05:54.249520 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:54.253199 systemd-logind[1421]: New session 17 of user core. Jan 29 11:05:54.260910 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:05:55.422135 sshd[4059]: Connection closed by 10.0.0.1 port 44038 Jan 29 11:05:55.422620 sshd-session[4057]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:55.431174 systemd[1]: sshd@16-10.0.0.81:22-10.0.0.1:44038.service: Deactivated successfully. Jan 29 11:05:55.435436 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:05:55.437982 systemd-logind[1421]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:05:55.444113 systemd[1]: Started sshd@17-10.0.0.81:22-10.0.0.1:44040.service - OpenSSH per-connection server daemon (10.0.0.1:44040). Jan 29 11:05:55.446631 systemd-logind[1421]: Removed session 17. Jan 29 11:05:55.487728 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 44040 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:05:55.489322 sshd-session[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:55.493762 systemd-logind[1421]: New session 18 of user core. Jan 29 11:05:55.501826 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:05:55.732854 sshd[4080]: Connection closed by 10.0.0.1 port 44040 Jan 29 11:05:55.734866 sshd-session[4078]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:55.745361 systemd[1]: sshd@17-10.0.0.81:22-10.0.0.1:44040.service: Deactivated successfully. Jan 29 11:05:55.746857 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:05:55.748292 systemd-logind[1421]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:05:55.749507 systemd[1]: Started sshd@18-10.0.0.81:22-10.0.0.1:44054.service - OpenSSH per-connection server daemon (10.0.0.1:44054). Jan 29 11:05:55.750369 systemd-logind[1421]: Removed session 18. Jan 29 11:05:55.794864 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 44054 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:05:55.796263 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:55.800290 systemd-logind[1421]: New session 19 of user core. Jan 29 11:05:55.805886 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:05:55.919581 sshd[4093]: Connection closed by 10.0.0.1 port 44054 Jan 29 11:05:55.919951 sshd-session[4091]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:55.922544 systemd[1]: sshd@18-10.0.0.81:22-10.0.0.1:44054.service: Deactivated successfully. Jan 29 11:05:55.924253 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:05:55.926519 systemd-logind[1421]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:05:55.927564 systemd-logind[1421]: Removed session 19. Jan 29 11:06:00.934773 systemd[1]: Started sshd@19-10.0.0.81:22-10.0.0.1:44064.service - OpenSSH per-connection server daemon (10.0.0.1:44064). 
Jan 29 11:06:00.979538 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 44064 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:06:00.981199 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:06:00.985452 systemd-logind[1421]: New session 20 of user core. Jan 29 11:06:00.999852 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:06:01.107947 sshd[4113]: Connection closed by 10.0.0.1 port 44064 Jan 29 11:06:01.108284 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Jan 29 11:06:01.111416 systemd[1]: sshd@19-10.0.0.81:22-10.0.0.1:44064.service: Deactivated successfully. Jan 29 11:06:01.113022 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:06:01.114874 systemd-logind[1421]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:06:01.115708 systemd-logind[1421]: Removed session 20. Jan 29 11:06:06.120629 systemd[1]: Started sshd@20-10.0.0.81:22-10.0.0.1:52184.service - OpenSSH per-connection server daemon (10.0.0.1:52184). Jan 29 11:06:06.165985 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 52184 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:06:06.167307 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:06:06.170909 systemd-logind[1421]: New session 21 of user core. Jan 29 11:06:06.181849 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:06:06.296462 sshd[4127]: Connection closed by 10.0.0.1 port 52184 Jan 29 11:06:06.297058 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Jan 29 11:06:06.300323 systemd[1]: sshd@20-10.0.0.81:22-10.0.0.1:52184.service: Deactivated successfully. Jan 29 11:06:06.301900 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:06:06.303231 systemd-logind[1421]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:06:06.304309 systemd-logind[1421]: Removed session 21. Jan 29 11:06:11.307283 systemd[1]: Started sshd@21-10.0.0.81:22-10.0.0.1:52196.service - OpenSSH per-connection server daemon (10.0.0.1:52196). Jan 29 11:06:11.350909 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 52196 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:06:11.352240 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:06:11.356292 systemd-logind[1421]: New session 22 of user core. Jan 29 11:06:11.371873 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:06:11.479999 sshd[4144]: Connection closed by 10.0.0.1 port 52196 Jan 29 11:06:11.480497 sshd-session[4142]: pam_unix(sshd:session): session closed for user core Jan 29 11:06:11.493271 systemd[1]: sshd@21-10.0.0.81:22-10.0.0.1:52196.service: Deactivated successfully. Jan 29 11:06:11.494977 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:06:11.496247 systemd-logind[1421]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:06:11.497567 systemd[1]: Started sshd@22-10.0.0.81:22-10.0.0.1:52202.service - OpenSSH per-connection server daemon (10.0.0.1:52202). Jan 29 11:06:11.498454 systemd-logind[1421]: Removed session 22. 
Jan 29 11:06:11.550838 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 52202 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:06:11.551181 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:06:11.555828 systemd-logind[1421]: New session 23 of user core. Jan 29 11:06:11.561867 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 11:06:14.234773 containerd[1442]: time="2025-01-29T11:06:14.234731816Z" level=info msg="StopContainer for \"649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af\" with timeout 30 (s)" Jan 29 11:06:14.235701 containerd[1442]: time="2025-01-29T11:06:14.235492687Z" level=info msg="Stop container \"649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af\" with signal terminated" Jan 29 11:06:14.245232 systemd[1]: cri-containerd-649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af.scope: Deactivated successfully. Jan 29 11:06:14.271837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af-rootfs.mount: Deactivated successfully. Jan 29 11:06:14.279572 containerd[1442]: time="2025-01-29T11:06:14.279496403Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:06:14.284329 containerd[1442]: time="2025-01-29T11:06:14.284146392Z" level=info msg="StopContainer for \"0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c\" with timeout 2 (s)" Jan 29 11:06:14.284561 containerd[1442]: time="2025-01-29T11:06:14.284540467Z" level=info msg="Stop container \"0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c\" with signal terminated" Jan 29 11:06:14.290246 systemd-networkd[1372]: lxc_health: Link DOWN Jan 29 11:06:14.290252 systemd-networkd[1372]: lxc_health: Lost carrier Jan 29 11:06:14.291708 containerd[1442]: time="2025-01-29T11:06:14.291605870Z" level=info msg="shim disconnected" id=649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af namespace=k8s.io Jan 29 11:06:14.291708 containerd[1442]: time="2025-01-29T11:06:14.291655949Z" level=warning msg="cleaning up after shim disconnected" id=649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af namespace=k8s.io Jan 29 11:06:14.291708 containerd[1442]: time="2025-01-29T11:06:14.291664109Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:06:14.317378 systemd[1]: cri-containerd-0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c.scope: Deactivated successfully. Jan 29 11:06:14.317806 systemd[1]: cri-containerd-0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c.scope: Consumed 6.706s CPU time. Jan 29 11:06:14.334308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c-rootfs.mount: Deactivated successfully. 
Jan 29 11:06:14.337500 containerd[1442]: time="2025-01-29T11:06:14.337460525Z" level=info msg="StopContainer for \"649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af\" returns successfully" Jan 29 11:06:14.340493 containerd[1442]: time="2025-01-29T11:06:14.340437092Z" level=info msg="StopPodSandbox for \"3b30f2fd90c9521c36e1f4a3b8c9c531a0a5785cc7cd82124ab3ada9aa3832d0\"" Jan 29 11:06:14.341355 containerd[1442]: time="2025-01-29T11:06:14.341312642Z" level=info msg="shim disconnected" id=0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c namespace=k8s.io Jan 29 11:06:14.341355 containerd[1442]: time="2025-01-29T11:06:14.341353042Z" level=warning msg="cleaning up after shim disconnected" id=0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c namespace=k8s.io Jan 29 11:06:14.341355 containerd[1442]: time="2025-01-29T11:06:14.341361282Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:06:14.343301 containerd[1442]: time="2025-01-29T11:06:14.343258341Z" level=info msg="Container to stop \"649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:06:14.346891 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3b30f2fd90c9521c36e1f4a3b8c9c531a0a5785cc7cd82124ab3ada9aa3832d0-shm.mount: Deactivated successfully. Jan 29 11:06:14.348585 systemd[1]: cri-containerd-3b30f2fd90c9521c36e1f4a3b8c9c531a0a5785cc7cd82124ab3ada9aa3832d0.scope: Deactivated successfully. Jan 29 11:06:14.368704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b30f2fd90c9521c36e1f4a3b8c9c531a0a5785cc7cd82124ab3ada9aa3832d0-rootfs.mount: Deactivated successfully. Jan 29 11:06:14.369427 containerd[1442]: time="2025-01-29T11:06:14.368760700Z" level=info msg="StopContainer for \"0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c\" returns successfully" Jan 29 11:06:14.369427 containerd[1442]: time="2025-01-29T11:06:14.369278655Z" level=info msg="StopPodSandbox for \"f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0\"" Jan 29 11:06:14.369427 containerd[1442]: time="2025-01-29T11:06:14.369313134Z" level=info msg="Container to stop \"57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:06:14.369427 containerd[1442]: time="2025-01-29T11:06:14.369323574Z" level=info msg="Container to stop \"0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:06:14.369427 containerd[1442]: time="2025-01-29T11:06:14.369332174Z" level=info msg="Container to stop \"067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:06:14.369427 containerd[1442]: time="2025-01-29T11:06:14.369341134Z" level=info msg="Container to stop \"6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:06:14.369427 containerd[1442]: time="2025-01-29T11:06:14.369350134Z" level=info msg="Container to stop \"f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:06:14.371834 containerd[1442]: time="2025-01-29T11:06:14.371618189Z" level=info msg="shim disconnected" 
id=3b30f2fd90c9521c36e1f4a3b8c9c531a0a5785cc7cd82124ab3ada9aa3832d0 namespace=k8s.io
Jan 29 11:06:14.371834 containerd[1442]: time="2025-01-29T11:06:14.371659508Z" level=warning msg="cleaning up after shim disconnected" id=3b30f2fd90c9521c36e1f4a3b8c9c531a0a5785cc7cd82124ab3ada9aa3832d0 namespace=k8s.io
Jan 29 11:06:14.371834 containerd[1442]: time="2025-01-29T11:06:14.371668468Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:06:14.375064 systemd[1]: cri-containerd-f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0.scope: Deactivated successfully.
Jan 29 11:06:14.386944 containerd[1442]: time="2025-01-29T11:06:14.386846581Z" level=info msg="TearDown network for sandbox \"3b30f2fd90c9521c36e1f4a3b8c9c531a0a5785cc7cd82124ab3ada9aa3832d0\" successfully"
Jan 29 11:06:14.386944 containerd[1442]: time="2025-01-29T11:06:14.386878021Z" level=info msg="StopPodSandbox for \"3b30f2fd90c9521c36e1f4a3b8c9c531a0a5785cc7cd82124ab3ada9aa3832d0\" returns successfully"
Jan 29 11:06:14.403554 containerd[1442]: time="2025-01-29T11:06:14.403253481Z" level=info msg="shim disconnected" id=f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0 namespace=k8s.io
Jan 29 11:06:14.403554 containerd[1442]: time="2025-01-29T11:06:14.403308720Z" level=warning msg="cleaning up after shim disconnected" id=f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0 namespace=k8s.io
Jan 29 11:06:14.403554 containerd[1442]: time="2025-01-29T11:06:14.403317240Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:06:14.414982 containerd[1442]: time="2025-01-29T11:06:14.414931152Z" level=info msg="TearDown network for sandbox \"f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0\" successfully"
Jan 29 11:06:14.414982 containerd[1442]: time="2025-01-29T11:06:14.414968392Z" level=info msg="StopPodSandbox for \"f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0\" returns successfully"
Jan 29 11:06:14.585196 kubelet[2515]: I0129 11:06:14.585052 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-etc-cni-netd\") pod \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") "
Jan 29 11:06:14.585196 kubelet[2515]: I0129 11:06:14.585127 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-host-proc-sys-net\") pod \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") "
Jan 29 11:06:14.585196 kubelet[2515]: I0129 11:06:14.585145 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-cilium-cgroup\") pod \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") "
Jan 29 11:06:14.585196 kubelet[2515]: I0129 11:06:14.585159 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-cilium-run\") pod \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") "
Jan 29 11:06:14.585196 kubelet[2515]: I0129 11:06:14.585174 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-bpf-maps\") pod \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") "
Jan 29 11:06:14.585622 kubelet[2515]: I0129 11:06:14.585253 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-lib-modules\") pod \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") "
Jan 29 11:06:14.585622 kubelet[2515]: I0129 11:06:14.585272 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-host-proc-sys-kernel\") pod \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") "
Jan 29 11:06:14.585622 kubelet[2515]: I0129 11:06:14.585297 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-hubble-tls\") pod \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") "
Jan 29 11:06:14.585622 kubelet[2515]: I0129 11:06:14.585315 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-cilium-config-path\") pod \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") "
Jan 29 11:06:14.585622 kubelet[2515]: I0129 11:06:14.585331 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c41a563-e669-47f2-849f-60a54a4d2387-cilium-config-path\") pod \"4c41a563-e669-47f2-849f-60a54a4d2387\" (UID: \"4c41a563-e669-47f2-849f-60a54a4d2387\") "
Jan 29 11:06:14.586292 kubelet[2515]: I0129 11:06:14.586098 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mszpr\" (UniqueName: \"kubernetes.io/projected/4c41a563-e669-47f2-849f-60a54a4d2387-kube-api-access-mszpr\") pod \"4c41a563-e669-47f2-849f-60a54a4d2387\" (UID: \"4c41a563-e669-47f2-849f-60a54a4d2387\") "
Jan 29 11:06:14.586292 kubelet[2515]: I0129 11:06:14.586126 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-xtables-lock\") pod \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") "
Jan 29 11:06:14.586292 kubelet[2515]: I0129 11:06:14.586147 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-clustermesh-secrets\") pod \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") "
Jan 29 11:06:14.586292 kubelet[2515]: I0129 11:06:14.586166 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-cni-path\") pod \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") "
Jan 29 11:06:14.586292 kubelet[2515]: I0129 11:06:14.586181 2515 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-hostproc\") pod \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\" (UID: \"d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d\") "
Jan 29 11:06:14.588362 kubelet[2515]: I0129 11:06:14.588159 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" (UID: "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:06:14.588362 kubelet[2515]: I0129 11:06:14.588161 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" (UID: "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:06:14.588362 kubelet[2515]: I0129 11:06:14.588251 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" (UID: "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:06:14.591286 kubelet[2515]: I0129 11:06:14.591252 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" (UID: "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:06:14.591435 kubelet[2515]: I0129 11:06:14.591314 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-cni-path" (OuterVolumeSpecName: "cni-path") pod "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" (UID: "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:06:14.593801 kubelet[2515]: I0129 11:06:14.593495 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-kube-api-access-6ncgj" (OuterVolumeSpecName: "kube-api-access-6ncgj") pod "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" (UID: "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d"). InnerVolumeSpecName "kube-api-access-6ncgj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:06:14.593801 kubelet[2515]: I0129 11:06:14.593561 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-hostproc" (OuterVolumeSpecName: "hostproc") pod "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" (UID: "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:06:14.593801 kubelet[2515]: I0129 11:06:14.593583 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" (UID: "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:06:14.593801 kubelet[2515]: I0129 11:06:14.593600 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" (UID: "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:06:14.593801 kubelet[2515]: I0129 11:06:14.593614 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" (UID: "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:06:14.593973 kubelet[2515]: I0129 11:06:14.593631 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" (UID: "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:06:14.593973 kubelet[2515]: I0129 11:06:14.593644 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" (UID: "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:06:14.595478 kubelet[2515]: I0129 11:06:14.595453 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" (UID: "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:06:14.595576 kubelet[2515]: I0129 11:06:14.595540 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c41a563-e669-47f2-849f-60a54a4d2387-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4c41a563-e669-47f2-849f-60a54a4d2387" (UID: "4c41a563-e669-47f2-849f-60a54a4d2387"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:06:14.595617 kubelet[2515]: I0129 11:06:14.595568 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" (UID: "d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:06:14.595886 kubelet[2515]: I0129 11:06:14.595843 2515 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c41a563-e669-47f2-849f-60a54a4d2387-kube-api-access-mszpr" (OuterVolumeSpecName: "kube-api-access-mszpr") pod "4c41a563-e669-47f2-849f-60a54a4d2387" (UID: "4c41a563-e669-47f2-849f-60a54a4d2387"). InnerVolumeSpecName "kube-api-access-mszpr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:06:14.686873 kubelet[2515]: I0129 11:06:14.686822 2515 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-hostproc\") on node \"localhost\" DevicePath \"\""
Jan 29 11:06:14.686873 kubelet[2515]: I0129 11:06:14.686870 2515 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-cni-path\") on node \"localhost\" DevicePath \"\""
Jan 29 11:06:14.687027 kubelet[2515]: I0129 11:06:14.686887 2515 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6ncgj\" (UniqueName: \"kubernetes.io/projected/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-kube-api-access-6ncgj\") on node \"localhost\" DevicePath \"\""
Jan 29 11:06:14.687027 kubelet[2515]: I0129 11:06:14.686904 2515 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jan 29 11:06:14.687027 kubelet[2515]: I0129 11:06:14.686912 2515 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-cilium-run\") on node \"localhost\" DevicePath \"\""
Jan 29 11:06:14.687027 kubelet[2515]: I0129 11:06:14.686919 2515 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jan 29 11:06:14.687027 kubelet[2515]: I0129 11:06:14.686926 2515 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jan 29 11:06:14.687027 kubelet[2515]: I0129 11:06:14.686933 2515 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jan 29 11:06:14.687027 kubelet[2515]: I0129 11:06:14.686942 2515 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 29 11:06:14.687027 kubelet[2515]: I0129 11:06:14.686949 2515 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jan 29 11:06:14.687193 kubelet[2515]: I0129 11:06:14.686956 2515 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jan 29 11:06:14.687193 kubelet[2515]: I0129 11:06:14.686966 2515 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 29 11:06:14.687193 kubelet[2515]: I0129 11:06:14.686973 2515 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c41a563-e669-47f2-849f-60a54a4d2387-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jan 29 11:06:14.687193 kubelet[2515]: I0129 11:06:14.686980 2515 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mszpr\" (UniqueName: \"kubernetes.io/projected/4c41a563-e669-47f2-849f-60a54a4d2387-kube-api-access-mszpr\") on node \"localhost\" DevicePath \"\""
Jan 29 11:06:14.687193 kubelet[2515]: I0129 11:06:14.686990 2515 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 29 11:06:14.687193 kubelet[2515]: I0129 11:06:14.686997 2515 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jan 29 11:06:14.824442 systemd[1]: Removed slice kubepods-besteffort-pod4c41a563_e669_47f2_849f_60a54a4d2387.slice - libcontainer container kubepods-besteffort-pod4c41a563_e669_47f2_849f_60a54a4d2387.slice.
Jan 29 11:06:14.827712 systemd[1]: Removed slice kubepods-burstable-podd7c5cf1b_71fe_424b_9a5a_e4fb37bd520d.slice - libcontainer container kubepods-burstable-podd7c5cf1b_71fe_424b_9a5a_e4fb37bd520d.slice.
Jan 29 11:06:14.827799 systemd[1]: kubepods-burstable-podd7c5cf1b_71fe_424b_9a5a_e4fb37bd520d.slice: Consumed 6.867s CPU time.
Jan 29 11:06:15.028326 kubelet[2515]: I0129 11:06:15.028268 2515 scope.go:117] "RemoveContainer" containerID="649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af"
Jan 29 11:06:15.030713 containerd[1442]: time="2025-01-29T11:06:15.030497927Z" level=info msg="RemoveContainer for \"649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af\""
Jan 29 11:06:15.036504 containerd[1442]: time="2025-01-29T11:06:15.036382148Z" level=info msg="RemoveContainer for \"649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af\" returns successfully"
Jan 29 11:06:15.036769 kubelet[2515]: I0129 11:06:15.036739 2515 scope.go:117] "RemoveContainer" containerID="649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af"
Jan 29 11:06:15.037391 containerd[1442]: time="2025-01-29T11:06:15.037355938Z" level=error msg="ContainerStatus for \"649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af\": not found"
Jan 29 11:06:15.044993 kubelet[2515]: E0129 11:06:15.044780 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af\": not found" containerID="649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af"
Jan 29 11:06:15.044993 kubelet[2515]: I0129 11:06:15.044831 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af"} err="failed to get container status \"649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af\": rpc error: code = NotFound desc = an error occurred when try to find container \"649da1b019ab5abf02a23c0be9facd69ce54484c3bba4e5129e9894eef5977af\": not found"
Jan 29 11:06:15.044993 kubelet[2515]: I0129 11:06:15.044912 2515 scope.go:117] "RemoveContainer" containerID="0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c"
Jan 29 11:06:15.046419 containerd[1442]: time="2025-01-29T11:06:15.046355249Z" level=info msg="RemoveContainer for \"0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c\""
Jan 29 11:06:15.059072 containerd[1442]: time="2025-01-29T11:06:15.059019802Z" level=info msg="RemoveContainer for \"0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c\" returns successfully"
Jan 29 11:06:15.059390 kubelet[2515]: I0129 11:06:15.059374 2515 scope.go:117] "RemoveContainer" containerID="f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2"
Jan 29 11:06:15.060372 containerd[1442]: time="2025-01-29T11:06:15.060344109Z" level=info msg="RemoveContainer for \"f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2\""
Jan 29 11:06:15.063067 containerd[1442]: time="2025-01-29T11:06:15.063031643Z" level=info msg="RemoveContainer for \"f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2\" returns successfully"
Jan 29 11:06:15.063245 kubelet[2515]: I0129 11:06:15.063229 2515 scope.go:117] "RemoveContainer" containerID="6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d"
Jan 29 11:06:15.064242 containerd[1442]: time="2025-01-29T11:06:15.064216551Z" level=info msg="RemoveContainer for \"6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d\""
Jan 29 11:06:15.066533 containerd[1442]: time="2025-01-29T11:06:15.066492168Z" level=info msg="RemoveContainer for \"6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d\" returns successfully"
Jan 29 11:06:15.066697 kubelet[2515]: I0129 11:06:15.066651 2515 scope.go:117] "RemoveContainer" containerID="067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5"
Jan 29 11:06:15.067697 containerd[1442]: time="2025-01-29T11:06:15.067656276Z" level=info msg="RemoveContainer for \"067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5\""
Jan 29 11:06:15.072352 containerd[1442]: time="2025-01-29T11:06:15.070047253Z" level=info msg="RemoveContainer for \"067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5\" returns successfully"
Jan 29 11:06:15.072501 kubelet[2515]: I0129 11:06:15.072450 2515 scope.go:117] "RemoveContainer" containerID="57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba"
Jan 29 11:06:15.074181 containerd[1442]: time="2025-01-29T11:06:15.074154812Z" level=info msg="RemoveContainer for \"57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba\""
Jan 29 11:06:15.076277 containerd[1442]: time="2025-01-29T11:06:15.076247911Z" level=info msg="RemoveContainer for \"57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba\" returns successfully"
Jan 29 11:06:15.076546 kubelet[2515]: I0129 11:06:15.076497 2515 scope.go:117] "RemoveContainer" containerID="0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c"
Jan 29 11:06:15.076816 containerd[1442]: time="2025-01-29T11:06:15.076777106Z" level=error msg="ContainerStatus for \"0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c\": not found"
Jan 29 11:06:15.076938 kubelet[2515]: E0129 11:06:15.076912 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c\": not found" containerID="0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c"
Jan 29 11:06:15.076970 kubelet[2515]: I0129 11:06:15.076946 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c"} err="failed to get container status \"0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d1224b3861e1f7535e6f518b6c872ffa7b7302f613f74c47f53c15b4273525c\": not found"
Jan 29 11:06:15.076970 kubelet[2515]: I0129 11:06:15.076968 2515 scope.go:117] "RemoveContainer" containerID="f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2"
Jan 29 11:06:15.077218 containerd[1442]: time="2025-01-29T11:06:15.077139142Z" level=error msg="ContainerStatus for \"f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2\": not found"
Jan 29 11:06:15.077289 kubelet[2515]: E0129 11:06:15.077265 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2\": not found" containerID="f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2"
Jan 29 11:06:15.077327 kubelet[2515]: I0129 11:06:15.077291 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2"} err="failed to get container status \"f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"f878cc58f4f035c61af350310c6b0f9de31116418fcdf97c1ad43cb4531921e2\": not found"
Jan 29 11:06:15.077327 kubelet[2515]: I0129 11:06:15.077305 2515 scope.go:117] "RemoveContainer" containerID="6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d"
Jan 29 11:06:15.077486 containerd[1442]: time="2025-01-29T11:06:15.077454419Z" level=error msg="ContainerStatus for \"6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d\": not found"
Jan 29 11:06:15.077732 kubelet[2515]: E0129 11:06:15.077602 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d\": not found" containerID="6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d"
Jan 29 11:06:15.077732 kubelet[2515]: I0129 11:06:15.077629 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d"} err="failed to get container status \"6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6631baa141920a0fb1398feb0639ce778e4556e92894bc9f98993dc0a31f661d\": not found"
Jan 29 11:06:15.077732 kubelet[2515]: I0129 11:06:15.077648 2515 scope.go:117] "RemoveContainer" containerID="067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5"
Jan 29 11:06:15.078047 containerd[1442]: time="2025-01-29T11:06:15.077978454Z" level=error msg="ContainerStatus for \"067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5\": not found"
Jan 29 11:06:15.078124 kubelet[2515]: E0129 11:06:15.078097 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5\": not found" containerID="067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5"
Jan 29 11:06:15.078159 kubelet[2515]: I0129 11:06:15.078127 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5"} err="failed to get container status \"067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"067be88aefb21bc59cd263cdb919be336eb55be56db00c9969612801d9f772e5\": not found"
Jan 29 11:06:15.078159 kubelet[2515]: I0129 11:06:15.078144 2515 scope.go:117] "RemoveContainer" containerID="57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba"
Jan 29 11:06:15.078435 containerd[1442]: time="2025-01-29T11:06:15.078407209Z" level=error msg="ContainerStatus for \"57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba\": not found"
Jan 29 11:06:15.078534 kubelet[2515]: E0129 11:06:15.078517 2515 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba\": not found" containerID="57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba"
Jan 29 11:06:15.078568 kubelet[2515]: I0129 11:06:15.078538 2515 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba"} err="failed to get container status \"57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"57148c954673f48dcf4b2ca78c0b2844436fd75921d50ed2e283796f05b679ba\": not found"
Jan 29 11:06:15.254133 systemd[1]: var-lib-kubelet-pods-4c41a563\x2de669\x2d47f2\x2d849f\x2d60a54a4d2387-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmszpr.mount: Deactivated successfully.
Jan 29 11:06:15.254230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0-rootfs.mount: Deactivated successfully.
Jan 29 11:06:15.254286 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f8c6899bd1be693246f0fd971bfff3dcc011804fd692301bd1819bbc120eddc0-shm.mount: Deactivated successfully.
Jan 29 11:06:15.254336 systemd[1]: var-lib-kubelet-pods-d7c5cf1b\x2d71fe\x2d424b\x2d9a5a\x2de4fb37bd520d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6ncgj.mount: Deactivated successfully.
Jan 29 11:06:15.254387 systemd[1]: var-lib-kubelet-pods-d7c5cf1b\x2d71fe\x2d424b\x2d9a5a\x2de4fb37bd520d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 29 11:06:15.254432 systemd[1]: var-lib-kubelet-pods-d7c5cf1b\x2d71fe\x2d424b\x2d9a5a\x2de4fb37bd520d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 29 11:06:15.889668 kubelet[2515]: E0129 11:06:15.889608 2515 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 11:06:16.197843 sshd[4158]: Connection closed by 10.0.0.1 port 52202
Jan 29 11:06:16.198433 sshd-session[4156]: pam_unix(sshd:session): session closed for user core
Jan 29 11:06:16.209256 systemd[1]: sshd@22-10.0.0.81:22-10.0.0.1:52202.service: Deactivated successfully.
Jan 29 11:06:16.211092 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 11:06:16.211282 systemd[1]: session-23.scope: Consumed 2.006s CPU time.
Jan 29 11:06:16.212363 systemd-logind[1421]: Session 23 logged out. Waiting for processes to exit.
Jan 29 11:06:16.213921 systemd[1]: Started sshd@23-10.0.0.81:22-10.0.0.1:38070.service - OpenSSH per-connection server daemon (10.0.0.1:38070).
Jan 29 11:06:16.214647 systemd-logind[1421]: Removed session 23.
Jan 29 11:06:16.257326 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 38070 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:06:16.258606 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:06:16.262790 systemd-logind[1421]: New session 24 of user core.
Jan 29 11:06:16.272903 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 11:06:16.821669 kubelet[2515]: I0129 11:06:16.820801 2515 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c41a563-e669-47f2-849f-60a54a4d2387" path="/var/lib/kubelet/pods/4c41a563-e669-47f2-849f-60a54a4d2387/volumes"
Jan 29 11:06:16.821669 kubelet[2515]: I0129 11:06:16.821190 2515 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" path="/var/lib/kubelet/pods/d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d/volumes"
Jan 29 11:06:17.236819 sshd[4320]: Connection closed by 10.0.0.1 port 38070
Jan 29 11:06:17.238901 sshd-session[4318]: pam_unix(sshd:session): session closed for user core
Jan 29 11:06:17.250860 systemd[1]: sshd@23-10.0.0.81:22-10.0.0.1:38070.service: Deactivated successfully.
Jan 29 11:06:17.255537 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 11:06:17.260328 systemd-logind[1421]: Session 24 logged out. Waiting for processes to exit.
Jan 29 11:06:17.263184 kubelet[2515]: E0129 11:06:17.263135 2515 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" containerName="mount-cgroup"
Jan 29 11:06:17.263184 kubelet[2515]: E0129 11:06:17.263165 2515 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" containerName="apply-sysctl-overwrites"
Jan 29 11:06:17.263184 kubelet[2515]: E0129 11:06:17.263172 2515 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c41a563-e669-47f2-849f-60a54a4d2387" containerName="cilium-operator"
Jan 29 11:06:17.263184 kubelet[2515]: E0129 11:06:17.263179 2515 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" containerName="mount-bpf-fs"
Jan 29 11:06:17.263184 kubelet[2515]: E0129 11:06:17.263185 2515 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" containerName="clean-cilium-state"
Jan 29 11:06:17.263184 kubelet[2515]: E0129 11:06:17.263191 2515 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" containerName="cilium-agent"
Jan 29 11:06:17.263676 kubelet[2515]: I0129 11:06:17.263216 2515 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c5cf1b-71fe-424b-9a5a-e4fb37bd520d" containerName="cilium-agent"
Jan 29 11:06:17.263676 kubelet[2515]: I0129 11:06:17.263224 2515 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c41a563-e669-47f2-849f-60a54a4d2387" containerName="cilium-operator"
Jan 29 11:06:17.273118 systemd[1]: Started sshd@24-10.0.0.81:22-10.0.0.1:38072.service - OpenSSH per-connection server daemon (10.0.0.1:38072).
Jan 29 11:06:17.278135 systemd-logind[1421]: Removed session 24.
Jan 29 11:06:17.284505 systemd[1]: Created slice kubepods-burstable-pod39092b8e_67cf_4d5d_b073_89f450c7be70.slice - libcontainer container kubepods-burstable-pod39092b8e_67cf_4d5d_b073_89f450c7be70.slice.
Jan 29 11:06:17.303976 kubelet[2515]: I0129 11:06:17.303929 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39092b8e-67cf-4d5d-b073-89f450c7be70-clustermesh-secrets\") pod \"cilium-6sbnt\" (UID: \"39092b8e-67cf-4d5d-b073-89f450c7be70\") " pod="kube-system/cilium-6sbnt"
Jan 29 11:06:17.303976 kubelet[2515]: I0129 11:06:17.303972 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39092b8e-67cf-4d5d-b073-89f450c7be70-cni-path\") pod \"cilium-6sbnt\" (UID: \"39092b8e-67cf-4d5d-b073-89f450c7be70\") " pod="kube-system/cilium-6sbnt"
Jan 29 11:06:17.304138 kubelet[2515]: I0129 11:06:17.303993 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39092b8e-67cf-4d5d-b073-89f450c7be70-bpf-maps\") pod \"cilium-6sbnt\" (UID: \"39092b8e-67cf-4d5d-b073-89f450c7be70\") " pod="kube-system/cilium-6sbnt"
Jan 29 11:06:17.304138 kubelet[2515]: I0129 11:06:17.304009 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39092b8e-67cf-4d5d-b073-89f450c7be70-etc-cni-netd\") pod \"cilium-6sbnt\" (UID: \"39092b8e-67cf-4d5d-b073-89f450c7be70\") " pod="kube-system/cilium-6sbnt"
Jan 29 11:06:17.304138 kubelet[2515]: I0129 11:06:17.304025 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39092b8e-67cf-4d5d-b073-89f450c7be70-xtables-lock\") pod \"cilium-6sbnt\" (UID: \"39092b8e-67cf-4d5d-b073-89f450c7be70\") " pod="kube-system/cilium-6sbnt"
Jan 29 11:06:17.304138 kubelet[2515]: I0129 11:06:17.304041 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/39092b8e-67cf-4d5d-b073-89f450c7be70-cilium-ipsec-secrets\") pod \"cilium-6sbnt\" (UID: \"39092b8e-67cf-4d5d-b073-89f450c7be70\") " pod="kube-system/cilium-6sbnt"
Jan 29 11:06:17.304138 kubelet[2515]: I0129 11:06:17.304055 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/39092b8e-67cf-4d5d-b073-89f450c7be70-hubble-tls\") pod \"cilium-6sbnt\" (UID: \"39092b8e-67cf-4d5d-b073-89f450c7be70\") " pod="kube-system/cilium-6sbnt"
Jan 29 11:06:17.304138 kubelet[2515]: I0129 11:06:17.304070 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39092b8e-67cf-4d5d-b073-89f450c7be70-cilium-run\") pod \"cilium-6sbnt\" (UID: \"39092b8e-67cf-4d5d-b073-89f450c7be70\") " pod="kube-system/cilium-6sbnt"
Jan 29 11:06:17.304269 kubelet[2515]: I0129 11:06:17.304084 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39092b8e-67cf-4d5d-b073-89f450c7be70-hostproc\") pod \"cilium-6sbnt\" (UID: \"39092b8e-67cf-4d5d-b073-89f450c7be70\") " pod="kube-system/cilium-6sbnt"
Jan 29 11:06:17.304269 kubelet[2515]: I0129 11:06:17.304110 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/39092b8e-67cf-4d5d-b073-89f450c7be70-cilium-cgroup\") pod \"cilium-6sbnt\" (UID: \"39092b8e-67cf-4d5d-b073-89f450c7be70\") " pod="kube-system/cilium-6sbnt"
Jan 29 11:06:17.304269 kubelet[2515]: I0129 11:06:17.304127 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hjjs\" (UniqueName: \"kubernetes.io/projected/39092b8e-67cf-4d5d-b073-89f450c7be70-kube-api-access-6hjjs\") pod \"cilium-6sbnt\" (UID: \"39092b8e-67cf-4d5d-b073-89f450c7be70\") " pod="kube-system/cilium-6sbnt"
Jan 29 11:06:17.304269 kubelet[2515]: I0129 11:06:17.304143 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39092b8e-67cf-4d5d-b073-89f450c7be70-lib-modules\") pod \"cilium-6sbnt\" (UID: \"39092b8e-67cf-4d5d-b073-89f450c7be70\") " pod="kube-system/cilium-6sbnt"
Jan 29 11:06:17.304269 kubelet[2515]: I0129 11:06:17.304160 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39092b8e-67cf-4d5d-b073-89f450c7be70-cilium-config-path\") pod \"cilium-6sbnt\" (UID: \"39092b8e-67cf-4d5d-b073-89f450c7be70\") " pod="kube-system/cilium-6sbnt"
Jan 29 11:06:17.304269 kubelet[2515]: I0129 11:06:17.304174 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39092b8e-67cf-4d5d-b073-89f450c7be70-host-proc-sys-net\") pod \"cilium-6sbnt\" (UID: \"39092b8e-67cf-4d5d-b073-89f450c7be70\") " pod="kube-system/cilium-6sbnt"
Jan 29 11:06:17.304386 kubelet[2515]: I0129 11:06:17.304189 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39092b8e-67cf-4d5d-b073-89f450c7be70-host-proc-sys-kernel\") pod \"cilium-6sbnt\" (UID: \"39092b8e-67cf-4d5d-b073-89f450c7be70\") " pod="kube-system/cilium-6sbnt"
Jan 29 11:06:17.324062 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 38072 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:06:17.325390 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:06:17.329218 systemd-logind[1421]: New session 25 of user core.
Jan 29 11:06:17.349882 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 11:06:17.403745 sshd[4333]: Connection closed by 10.0.0.1 port 38072
Jan 29 11:06:17.404741 sshd-session[4331]: pam_unix(sshd:session): session closed for user core
Jan 29 11:06:17.422631 systemd[1]: sshd@24-10.0.0.81:22-10.0.0.1:38072.service: Deactivated successfully.
Jan 29 11:06:17.424429 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 11:06:17.425910 systemd-logind[1421]: Session 25 logged out. Waiting for processes to exit.
Jan 29 11:06:17.430984 systemd[1]: Started sshd@25-10.0.0.81:22-10.0.0.1:38088.service - OpenSSH per-connection server daemon (10.0.0.1:38088).
Jan 29 11:06:17.431944 systemd-logind[1421]: Removed session 25.
Jan 29 11:06:17.471594 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 38088 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:06:17.472973 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:06:17.477013 systemd-logind[1421]: New session 26 of user core.
Jan 29 11:06:17.500899 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 11:06:17.588547 kubelet[2515]: E0129 11:06:17.588511 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:06:17.590617 containerd[1442]: time="2025-01-29T11:06:17.590092302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6sbnt,Uid:39092b8e-67cf-4d5d-b073-89f450c7be70,Namespace:kube-system,Attempt:0,}"
Jan 29 11:06:17.615038 containerd[1442]: time="2025-01-29T11:06:17.613614755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:06:17.615038 containerd[1442]: time="2025-01-29T11:06:17.613677754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:06:17.615038 containerd[1442]: time="2025-01-29T11:06:17.613709354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:06:17.615038 containerd[1442]: time="2025-01-29T11:06:17.613816073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:06:17.635923 systemd[1]: Started cri-containerd-3684bd3e155a1874d2c9e20575b3cdb5c729088da07a46c409d20f5d86a8b209.scope - libcontainer container 3684bd3e155a1874d2c9e20575b3cdb5c729088da07a46c409d20f5d86a8b209.
Jan 29 11:06:17.653422 containerd[1442]: time="2025-01-29T11:06:17.653340758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6sbnt,Uid:39092b8e-67cf-4d5d-b073-89f450c7be70,Namespace:kube-system,Attempt:0,} returns sandbox id \"3684bd3e155a1874d2c9e20575b3cdb5c729088da07a46c409d20f5d86a8b209\""
Jan 29 11:06:17.654600 kubelet[2515]: E0129 11:06:17.654253 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:06:17.658259 containerd[1442]: time="2025-01-29T11:06:17.658210000Z" level=info msg="CreateContainer within sandbox \"3684bd3e155a1874d2c9e20575b3cdb5c729088da07a46c409d20f5d86a8b209\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 11:06:17.672119 containerd[1442]: time="2025-01-29T11:06:17.672073169Z" level=info msg="CreateContainer within sandbox \"3684bd3e155a1874d2c9e20575b3cdb5c729088da07a46c409d20f5d86a8b209\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f1abfb541bc33ab8243f1d425fb9386cd0ae9a8258f1b05342b9053554f28a5a\""
Jan 29 11:06:17.672901 containerd[1442]: time="2025-01-29T11:06:17.672716444Z" level=info msg="StartContainer for \"f1abfb541bc33ab8243f1d425fb9386cd0ae9a8258f1b05342b9053554f28a5a\""
Jan 29 11:06:17.700880 systemd[1]: Started cri-containerd-f1abfb541bc33ab8243f1d425fb9386cd0ae9a8258f1b05342b9053554f28a5a.scope - libcontainer container f1abfb541bc33ab8243f1d425fb9386cd0ae9a8258f1b05342b9053554f28a5a.
Jan 29 11:06:17.723117 containerd[1442]: time="2025-01-29T11:06:17.723076243Z" level=info msg="StartContainer for \"f1abfb541bc33ab8243f1d425fb9386cd0ae9a8258f1b05342b9053554f28a5a\" returns successfully"
Jan 29 11:06:17.735184 systemd[1]: cri-containerd-f1abfb541bc33ab8243f1d425fb9386cd0ae9a8258f1b05342b9053554f28a5a.scope: Deactivated successfully.
Jan 29 11:06:17.763235 containerd[1442]: time="2025-01-29T11:06:17.763095644Z" level=info msg="shim disconnected" id=f1abfb541bc33ab8243f1d425fb9386cd0ae9a8258f1b05342b9053554f28a5a namespace=k8s.io
Jan 29 11:06:17.763235 containerd[1442]: time="2025-01-29T11:06:17.763149564Z" level=warning msg="cleaning up after shim disconnected" id=f1abfb541bc33ab8243f1d425fb9386cd0ae9a8258f1b05342b9053554f28a5a namespace=k8s.io
Jan 29 11:06:17.763235 containerd[1442]: time="2025-01-29T11:06:17.763158204Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:06:17.817029 kubelet[2515]: E0129 11:06:17.816986 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:06:18.039614 kubelet[2515]: E0129 11:06:18.038939 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:06:18.044312 containerd[1442]: time="2025-01-29T11:06:18.044116327Z" level=info msg="CreateContainer within sandbox \"3684bd3e155a1874d2c9e20575b3cdb5c729088da07a46c409d20f5d86a8b209\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 11:06:18.053512 containerd[1442]: time="2025-01-29T11:06:18.053473661Z" level=info msg="CreateContainer within sandbox \"3684bd3e155a1874d2c9e20575b3cdb5c729088da07a46c409d20f5d86a8b209\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"15b4577a752145cd468512a4c213d05baf41db9ee28e3fc7e7206bff542cfe20\""
Jan 29 11:06:18.054480 containerd[1442]: time="2025-01-29T11:06:18.054454214Z" level=info msg="StartContainer for \"15b4577a752145cd468512a4c213d05baf41db9ee28e3fc7e7206bff542cfe20\""
Jan 29 11:06:18.079859 systemd[1]: Started cri-containerd-15b4577a752145cd468512a4c213d05baf41db9ee28e3fc7e7206bff542cfe20.scope - libcontainer container 15b4577a752145cd468512a4c213d05baf41db9ee28e3fc7e7206bff542cfe20.
Jan 29 11:06:18.105761 systemd[1]: cri-containerd-15b4577a752145cd468512a4c213d05baf41db9ee28e3fc7e7206bff542cfe20.scope: Deactivated successfully.
Jan 29 11:06:18.106811 containerd[1442]: time="2025-01-29T11:06:18.106664768Z" level=info msg="StartContainer for \"15b4577a752145cd468512a4c213d05baf41db9ee28e3fc7e7206bff542cfe20\" returns successfully"
Jan 29 11:06:18.125718 containerd[1442]: time="2025-01-29T11:06:18.125658235Z" level=info msg="shim disconnected" id=15b4577a752145cd468512a4c213d05baf41db9ee28e3fc7e7206bff542cfe20 namespace=k8s.io
Jan 29 11:06:18.125718 containerd[1442]: time="2025-01-29T11:06:18.125717115Z" level=warning msg="cleaning up after shim disconnected" id=15b4577a752145cd468512a4c213d05baf41db9ee28e3fc7e7206bff542cfe20 namespace=k8s.io
Jan 29 11:06:18.125891 containerd[1442]: time="2025-01-29T11:06:18.125725755Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:06:19.043568 kubelet[2515]: E0129 11:06:19.042067 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:06:19.044005 containerd[1442]: time="2025-01-29T11:06:19.043586799Z" level=info msg="CreateContainer within sandbox \"3684bd3e155a1874d2c9e20575b3cdb5c729088da07a46c409d20f5d86a8b209\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 11:06:19.076215 containerd[1442]: time="2025-01-29T11:06:19.075947362Z" level=info msg="CreateContainer within sandbox \"3684bd3e155a1874d2c9e20575b3cdb5c729088da07a46c409d20f5d86a8b209\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d393bbc450dace32362b8ec939544f91bb51d40aca34ead74d6ca3740757af64\""
Jan 29 11:06:19.076876 containerd[1442]: time="2025-01-29T11:06:19.076793437Z" level=info msg="StartContainer for \"d393bbc450dace32362b8ec939544f91bb51d40aca34ead74d6ca3740757af64\""
Jan 29 11:06:19.119889 systemd[1]: Started cri-containerd-d393bbc450dace32362b8ec939544f91bb51d40aca34ead74d6ca3740757af64.scope - libcontainer container d393bbc450dace32362b8ec939544f91bb51d40aca34ead74d6ca3740757af64.
Jan 29 11:06:19.146052 containerd[1442]: time="2025-01-29T11:06:19.146006735Z" level=info msg="StartContainer for \"d393bbc450dace32362b8ec939544f91bb51d40aca34ead74d6ca3740757af64\" returns successfully"
Jan 29 11:06:19.146829 systemd[1]: cri-containerd-d393bbc450dace32362b8ec939544f91bb51d40aca34ead74d6ca3740757af64.scope: Deactivated successfully.
Jan 29 11:06:19.168937 containerd[1442]: time="2025-01-29T11:06:19.168878676Z" level=info msg="shim disconnected" id=d393bbc450dace32362b8ec939544f91bb51d40aca34ead74d6ca3740757af64 namespace=k8s.io
Jan 29 11:06:19.168937 containerd[1442]: time="2025-01-29T11:06:19.168931796Z" level=warning msg="cleaning up after shim disconnected" id=d393bbc450dace32362b8ec939544f91bb51d40aca34ead74d6ca3740757af64 namespace=k8s.io
Jan 29 11:06:19.168937 containerd[1442]: time="2025-01-29T11:06:19.168941276Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:06:19.416932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d393bbc450dace32362b8ec939544f91bb51d40aca34ead74d6ca3740757af64-rootfs.mount: Deactivated successfully.
Jan 29 11:06:20.045843 kubelet[2515]: E0129 11:06:20.045811 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:06:20.047963 containerd[1442]: time="2025-01-29T11:06:20.047836207Z" level=info msg="CreateContainer within sandbox \"3684bd3e155a1874d2c9e20575b3cdb5c729088da07a46c409d20f5d86a8b209\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 11:06:20.061192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2321199644.mount: Deactivated successfully.
Jan 29 11:06:20.063150 containerd[1442]: time="2025-01-29T11:06:20.063108168Z" level=info msg="CreateContainer within sandbox \"3684bd3e155a1874d2c9e20575b3cdb5c729088da07a46c409d20f5d86a8b209\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e2ef76d2d3381d53c337fd5896c390f55146f43b800f97d7b5eeefa5c520ade5\""
Jan 29 11:06:20.064652 containerd[1442]: time="2025-01-29T11:06:20.063676045Z" level=info msg="StartContainer for \"e2ef76d2d3381d53c337fd5896c390f55146f43b800f97d7b5eeefa5c520ade5\""
Jan 29 11:06:20.112878 systemd[1]: Started cri-containerd-e2ef76d2d3381d53c337fd5896c390f55146f43b800f97d7b5eeefa5c520ade5.scope - libcontainer container e2ef76d2d3381d53c337fd5896c390f55146f43b800f97d7b5eeefa5c520ade5.
Jan 29 11:06:20.139550 systemd[1]: cri-containerd-e2ef76d2d3381d53c337fd5896c390f55146f43b800f97d7b5eeefa5c520ade5.scope: Deactivated successfully.
Jan 29 11:06:20.142298 containerd[1442]: time="2025-01-29T11:06:20.142254637Z" level=info msg="StartContainer for \"e2ef76d2d3381d53c337fd5896c390f55146f43b800f97d7b5eeefa5c520ade5\" returns successfully"
Jan 29 11:06:20.167339 containerd[1442]: time="2025-01-29T11:06:20.167239867Z" level=info msg="shim disconnected" id=e2ef76d2d3381d53c337fd5896c390f55146f43b800f97d7b5eeefa5c520ade5 namespace=k8s.io
Jan 29 11:06:20.167339 containerd[1442]: time="2025-01-29T11:06:20.167296867Z" level=warning msg="cleaning up after shim disconnected" id=e2ef76d2d3381d53c337fd5896c390f55146f43b800f97d7b5eeefa5c520ade5 namespace=k8s.io
Jan 29 11:06:20.167339 containerd[1442]: time="2025-01-29T11:06:20.167304947Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:06:20.416806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2ef76d2d3381d53c337fd5896c390f55146f43b800f97d7b5eeefa5c520ade5-rootfs.mount: Deactivated successfully.
Jan 29 11:06:20.890439 kubelet[2515]: E0129 11:06:20.890401 2515 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 11:06:21.051580 kubelet[2515]: E0129 11:06:21.050886 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:06:21.054612 containerd[1442]: time="2025-01-29T11:06:21.053879548Z" level=info msg="CreateContainer within sandbox \"3684bd3e155a1874d2c9e20575b3cdb5c729088da07a46c409d20f5d86a8b209\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 11:06:21.072477 containerd[1442]: time="2025-01-29T11:06:21.071504831Z" level=info msg="CreateContainer within sandbox \"3684bd3e155a1874d2c9e20575b3cdb5c729088da07a46c409d20f5d86a8b209\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"37a79cf7dc0229a77cdff8bc84514bd0e848c75edd6630378cc2427aecb9d5e0\""
Jan 29 11:06:21.073101 containerd[1442]: time="2025-01-29T11:06:21.073054505Z" level=info msg="StartContainer for \"37a79cf7dc0229a77cdff8bc84514bd0e848c75edd6630378cc2427aecb9d5e0\""
Jan 29 11:06:21.099984 systemd[1]: Started cri-containerd-37a79cf7dc0229a77cdff8bc84514bd0e848c75edd6630378cc2427aecb9d5e0.scope - libcontainer container 37a79cf7dc0229a77cdff8bc84514bd0e848c75edd6630378cc2427aecb9d5e0.
Jan 29 11:06:21.125734 containerd[1442]: time="2025-01-29T11:06:21.125596917Z" level=info msg="StartContainer for \"37a79cf7dc0229a77cdff8bc84514bd0e848c75edd6630378cc2427aecb9d5e0\" returns successfully"
Jan 29 11:06:21.399886 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 29 11:06:21.816256 kubelet[2515]: E0129 11:06:21.816148 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:06:22.054156 kubelet[2515]: E0129 11:06:22.054110 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:06:22.069357 kubelet[2515]: I0129 11:06:22.069238 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6sbnt" podStartSLOduration=5.06922449 podStartE2EDuration="5.06922449s" podCreationTimestamp="2025-01-29 11:06:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:06:22.06909857 +0000 UTC m=+81.320633852" watchObservedRunningTime="2025-01-29 11:06:22.06922449 +0000 UTC m=+81.320759772"
Jan 29 11:06:22.413659 kubelet[2515]: I0129 11:06:22.413598 2515 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T11:06:22Z","lastTransitionTime":"2025-01-29T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 11:06:23.589671 kubelet[2515]: E0129 11:06:23.589538 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:06:24.207483 systemd-networkd[1372]: lxc_health: Link UP
Jan 29 11:06:24.218719 systemd-networkd[1372]: lxc_health: Gained carrier
Jan 29 11:06:25.593191 kubelet[2515]: E0129 11:06:25.591840 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:06:25.931245 systemd[1]: run-containerd-runc-k8s.io-37a79cf7dc0229a77cdff8bc84514bd0e848c75edd6630378cc2427aecb9d5e0-runc.C9m2Z0.mount: Deactivated successfully.
Jan 29 11:06:26.013849 systemd-networkd[1372]: lxc_health: Gained IPv6LL
Jan 29 11:06:26.065545 kubelet[2515]: E0129 11:06:26.065290 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:06:27.066443 kubelet[2515]: E0129 11:06:27.066395 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:06:30.240421 sshd[4345]: Connection closed by 10.0.0.1 port 38088
Jan 29 11:06:30.243024 sshd-session[4343]: pam_unix(sshd:session): session closed for user core
Jan 29 11:06:30.245983 systemd[1]: sshd@25-10.0.0.81:22-10.0.0.1:38088.service: Deactivated successfully.
Jan 29 11:06:30.247855 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 11:06:30.249188 systemd-logind[1421]: Session 26 logged out. Waiting for processes to exit.
Jan 29 11:06:30.250470 systemd-logind[1421]: Removed session 26.