Jan 13 20:09:14.909296 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:09:14.909317 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025
Jan 13 20:09:14.909327 kernel: KASLR enabled
Jan 13 20:09:14.909333 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:09:14.909339 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Jan 13 20:09:14.909357 kernel: random: crng init done
Jan 13 20:09:14.909367 kernel: secureboot: Secure boot disabled
Jan 13 20:09:14.909374 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:09:14.909382 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 13 20:09:14.909393 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 20:09:14.909400 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:09:14.909406 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:09:14.909412 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:09:14.909418 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:09:14.909425 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:09:14.909433 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:09:14.909439 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:09:14.909445 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:09:14.909451 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:09:14.909457 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 13 20:09:14.909464 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:09:14.909470 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:09:14.909476 kernel: NUMA: NODE_DATA [mem 0xdc95b800-0xdc960fff]
Jan 13 20:09:14.909482 kernel: Zone ranges:
Jan 13 20:09:14.909488 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:09:14.909495 kernel: DMA32 empty
Jan 13 20:09:14.909501 kernel: Normal empty
Jan 13 20:09:14.909507 kernel: Movable zone start for each node
Jan 13 20:09:14.909513 kernel: Early memory node ranges
Jan 13 20:09:14.909520 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 13 20:09:14.909526 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 13 20:09:14.909532 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 13 20:09:14.909539 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 13 20:09:14.909545 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 13 20:09:14.909552 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 13 20:09:14.909558 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 13 20:09:14.909564 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:09:14.909571 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 13 20:09:14.909578 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:09:14.909584 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:09:14.909601 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:09:14.909608 kernel: psci: Trusted OS migration not required
Jan 13 20:09:14.909615 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:09:14.909623 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:09:14.909630 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:09:14.909637 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:09:14.909643 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 13 20:09:14.909650 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:09:14.909657 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:09:14.909663 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:09:14.909670 kernel: CPU features: detected: Spectre-v4
Jan 13 20:09:14.909676 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:09:14.909683 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:09:14.909691 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:09:14.909698 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:09:14.909704 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:09:14.909711 kernel: alternatives: applying boot alternatives
Jan 13 20:09:14.909718 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:09:14.909725 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:09:14.909732 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:09:14.909744 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:09:14.909750 kernel: Fallback order for Node 0: 0
Jan 13 20:09:14.909757 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 13 20:09:14.909764 kernel: Policy zone: DMA
Jan 13 20:09:14.909772 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:09:14.909778 kernel: software IO TLB: area num 4.
Jan 13 20:09:14.909785 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 13 20:09:14.909792 kernel: Memory: 2386336K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185952K reserved, 0K cma-reserved)
Jan 13 20:09:14.909799 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:09:14.909805 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:09:14.909812 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:09:14.909819 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:09:14.909826 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:09:14.909832 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:09:14.909839 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:09:14.909845 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:09:14.909853 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:09:14.909860 kernel: GICv3: 256 SPIs implemented
Jan 13 20:09:14.909866 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:09:14.909873 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:09:14.909879 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:09:14.909886 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:09:14.909892 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:09:14.909899 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:09:14.909906 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:09:14.909913 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 13 20:09:14.909919 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 13 20:09:14.909927 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:09:14.909934 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:09:14.909940 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:09:14.909947 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:09:14.909954 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:09:14.909960 kernel: arm-pv: using stolen time PV
Jan 13 20:09:14.909967 kernel: Console: colour dummy device 80x25
Jan 13 20:09:14.909974 kernel: ACPI: Core revision 20230628
Jan 13 20:09:14.909981 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:09:14.909988 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:09:14.909997 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:09:14.910004 kernel: landlock: Up and running.
Jan 13 20:09:14.910011 kernel: SELinux: Initializing.
Jan 13 20:09:14.910018 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:09:14.910024 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:09:14.910031 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:09:14.910040 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:09:14.910048 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:09:14.910055 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:09:14.910063 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:09:14.910070 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:09:14.910077 kernel: Remapping and enabling EFI services.
Jan 13 20:09:14.910083 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:09:14.910090 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:09:14.910097 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:09:14.910103 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 13 20:09:14.910111 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:09:14.910117 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:09:14.910124 kernel: Detected PIPT I-cache on CPU2
Jan 13 20:09:14.910132 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 13 20:09:14.910139 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 13 20:09:14.910151 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:09:14.910159 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 13 20:09:14.910166 kernel: Detected PIPT I-cache on CPU3
Jan 13 20:09:14.910173 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 13 20:09:14.910180 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 13 20:09:14.910188 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:09:14.910195 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 13 20:09:14.910204 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:09:14.910211 kernel: SMP: Total of 4 processors activated.
Jan 13 20:09:14.910218 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:09:14.910225 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:09:14.910232 kernel: CPU features: detected: Common not Private translations
Jan 13 20:09:14.910239 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:09:14.910246 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:09:14.910253 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:09:14.910262 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:09:14.910268 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:09:14.910275 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:09:14.910282 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:09:14.910289 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:09:14.910296 kernel: alternatives: applying system-wide alternatives
Jan 13 20:09:14.910303 kernel: devtmpfs: initialized
Jan 13 20:09:14.910311 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:09:14.910318 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:09:14.910326 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:09:14.910333 kernel: SMBIOS 3.0.0 present.
Jan 13 20:09:14.910340 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 13 20:09:14.910351 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:09:14.910359 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:09:14.910366 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:09:14.910373 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:09:14.910381 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:09:14.910388 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jan 13 20:09:14.910397 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:09:14.910404 kernel: cpuidle: using governor menu
Jan 13 20:09:14.910411 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:09:14.910418 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:09:14.910425 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:09:14.910433 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:09:14.910440 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:09:14.910453 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:09:14.910461 kernel: Modules: 508960 pages in range for PLT usage
Jan 13 20:09:14.910470 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:09:14.910477 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:09:14.910484 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:09:14.910492 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:09:14.910499 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:09:14.910506 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:09:14.910513 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:09:14.910520 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:09:14.910527 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:09:14.910535 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:09:14.910542 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:09:14.910549 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:09:14.910556 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:09:14.910563 kernel: ACPI: Interpreter enabled
Jan 13 20:09:14.910570 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:09:14.910577 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:09:14.910585 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:09:14.910592 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:09:14.910612 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:09:14.910765 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:09:14.910838 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:09:14.910901 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:09:14.910963 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:09:14.911025 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:09:14.911035 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 20:09:14.911045 kernel: PCI host bridge to bus 0000:00
Jan 13 20:09:14.911112 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:09:14.911169 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:09:14.911225 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:09:14.911280 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:09:14.911365 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:09:14.911440 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:09:14.911508 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 13 20:09:14.911574 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 13 20:09:14.911662 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:09:14.911728 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:09:14.911803 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 13 20:09:14.911867 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 13 20:09:14.911923 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:09:14.911981 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:09:14.912036 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:09:14.912046 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:09:14.912053 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:09:14.912060 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:09:14.912067 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:09:14.912074 kernel: iommu: Default domain type: Translated
Jan 13 20:09:14.912082 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:09:14.912090 kernel: efivars: Registered efivars operations
Jan 13 20:09:14.912097 kernel: vgaarb: loaded
Jan 13 20:09:14.912104 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:09:14.912111 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:09:14.912119 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:09:14.912126 kernel: pnp: PnP ACPI init
Jan 13 20:09:14.912193 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:09:14.912203 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:09:14.912212 kernel: NET: Registered PF_INET protocol family
Jan 13 20:09:14.912219 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:09:14.912227 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:09:14.912234 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:09:14.912241 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:09:14.912248 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:09:14.912255 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:09:14.912263 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:09:14.912270 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:09:14.912278 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:09:14.912286 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:09:14.912293 kernel: kvm [1]: HYP mode not available
Jan 13 20:09:14.912300 kernel: Initialise system trusted keyrings
Jan 13 20:09:14.912307 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:09:14.912314 kernel: Key type asymmetric registered
Jan 13 20:09:14.912321 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:09:14.912328 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:09:14.912335 kernel: io scheduler mq-deadline registered
Jan 13 20:09:14.912344 kernel: io scheduler kyber registered
Jan 13 20:09:14.912356 kernel: io scheduler bfq registered
Jan 13 20:09:14.912363 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:09:14.912370 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:09:14.912378 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:09:14.912446 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 13 20:09:14.912456 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:09:14.912464 kernel: thunder_xcv, ver 1.0
Jan 13 20:09:14.912471 kernel: thunder_bgx, ver 1.0
Jan 13 20:09:14.912480 kernel: nicpf, ver 1.0
Jan 13 20:09:14.912487 kernel: nicvf, ver 1.0
Jan 13 20:09:14.912557 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:09:14.912633 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:09:14 UTC (1736798954)
Jan 13 20:09:14.912644 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:09:14.912651 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 20:09:14.912658 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:09:14.912665 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:09:14.912675 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:09:14.912682 kernel: Segment Routing with IPv6
Jan 13 20:09:14.912689 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:09:14.912696 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:09:14.912703 kernel: Key type dns_resolver registered
Jan 13 20:09:14.912710 kernel: registered taskstats version 1
Jan 13 20:09:14.912717 kernel: Loading compiled-in X.509 certificates
Jan 13 20:09:14.912724 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb'
Jan 13 20:09:14.912731 kernel: Key type .fscrypt registered
Jan 13 20:09:14.912745 kernel: Key type fscrypt-provisioning registered
Jan 13 20:09:14.912753 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:09:14.912760 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:09:14.912767 kernel: ima: No architecture policies found
Jan 13 20:09:14.912774 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:09:14.912781 kernel: clk: Disabling unused clocks
Jan 13 20:09:14.912788 kernel: Freeing unused kernel memory: 39680K
Jan 13 20:09:14.912795 kernel: Run /init as init process
Jan 13 20:09:14.912802 kernel: with arguments:
Jan 13 20:09:14.912810 kernel: /init
Jan 13 20:09:14.912817 kernel: with environment:
Jan 13 20:09:14.912824 kernel: HOME=/
Jan 13 20:09:14.912831 kernel: TERM=linux
Jan 13 20:09:14.912838 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:09:14.912847 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:09:14.912856 systemd[1]: Detected virtualization kvm.
Jan 13 20:09:14.912863 systemd[1]: Detected architecture arm64.
Jan 13 20:09:14.912872 systemd[1]: Running in initrd.
Jan 13 20:09:14.912880 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:09:14.912887 systemd[1]: Hostname set to <localhost>.
Jan 13 20:09:14.912895 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:09:14.912903 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:09:14.912910 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:09:14.912921 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:09:14.912929 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:09:14.912939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:09:14.912947 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:09:14.912955 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:09:14.912964 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:09:14.912972 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:09:14.912980 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:09:14.912987 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:09:14.912997 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:09:14.913007 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:09:14.913015 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:09:14.913022 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:09:14.913030 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:09:14.913040 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:09:14.913047 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:09:14.913055 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:09:14.913064 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:09:14.913072 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:09:14.913079 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:09:14.913087 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:09:14.913095 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:09:14.913102 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:09:14.913110 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:09:14.913118 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:09:14.913125 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:09:14.913134 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:09:14.913142 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:09:14.913149 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:09:14.913157 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:09:14.913165 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:09:14.913189 systemd-journald[238]: Collecting audit messages is disabled.
Jan 13 20:09:14.913209 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:09:14.913217 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:09:14.913226 kernel: Bridge firewalling registered
Jan 13 20:09:14.913233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:09:14.913241 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:09:14.913249 systemd-journald[238]: Journal started
Jan 13 20:09:14.913267 systemd-journald[238]: Runtime Journal (/run/log/journal/5efeff867de048ea94024241899c462b) is 5.9M, max 47.3M, 41.4M free.
Jan 13 20:09:14.894853 systemd-modules-load[239]: Inserted module 'overlay'
Jan 13 20:09:14.910011 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 13 20:09:14.916661 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:09:14.918627 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:09:14.932730 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:09:14.934124 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:09:14.935714 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:09:14.938483 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:09:14.946159 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:09:14.948344 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:09:14.949278 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:09:14.960753 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:09:14.961766 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:09:14.964112 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:09:14.976925 dracut-cmdline[281]: dracut-dracut-053
Jan 13 20:09:14.979252 dracut-cmdline[281]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:09:14.986647 systemd-resolved[279]: Positive Trust Anchors:
Jan 13 20:09:14.986719 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:09:14.986759 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:09:14.991462 systemd-resolved[279]: Defaulting to hostname 'linux'.
Jan 13 20:09:14.992421 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:09:14.994307 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:09:15.053625 kernel: SCSI subsystem initialized
Jan 13 20:09:15.060615 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:09:15.067620 kernel: iscsi: registered transport (tcp)
Jan 13 20:09:15.080613 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:09:15.080628 kernel: QLogic iSCSI HBA Driver
Jan 13 20:09:15.121301 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:09:15.137803 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:09:15.155393 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:09:15.155457 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:09:15.155485 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:09:15.208620 kernel: raid6: neonx8 gen() 15675 MB/s
Jan 13 20:09:15.225624 kernel: raid6: neonx4 gen() 15596 MB/s
Jan 13 20:09:15.242611 kernel: raid6: neonx2 gen() 13294 MB/s
Jan 13 20:09:15.259609 kernel: raid6: neonx1 gen() 10467 MB/s
Jan 13 20:09:15.276618 kernel: raid6: int64x8 gen() 6938 MB/s
Jan 13 20:09:15.293609 kernel: raid6: int64x4 gen() 7330 MB/s
Jan 13 20:09:15.310609 kernel: raid6: int64x2 gen() 6105 MB/s
Jan 13 20:09:15.327610 kernel: raid6: int64x1 gen() 5036 MB/s
Jan 13 20:09:15.327623 kernel: raid6: using algorithm neonx8 gen() 15675 MB/s
Jan 13 20:09:15.344624 kernel: raid6: .... xor() 11894 MB/s, rmw enabled
Jan 13 20:09:15.344646 kernel: raid6: using neon recovery algorithm
Jan 13 20:09:15.350614 kernel: xor: measuring software checksum speed
Jan 13 20:09:15.351668 kernel: 8regs : 18011 MB/sec
Jan 13 20:09:15.351684 kernel: 32regs : 19679 MB/sec
Jan 13 20:09:15.352625 kernel: arm64_neon : 26130 MB/sec
Jan 13 20:09:15.352637 kernel: xor: using function: arm64_neon (26130 MB/sec)
Jan 13 20:09:15.402638 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:09:15.412439 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:09:15.420750 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:09:15.431914 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Jan 13 20:09:15.434923 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:09:15.437037 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:09:15.450853 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Jan 13 20:09:15.474629 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:09:15.490781 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:09:15.528631 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:09:15.535787 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:09:15.546939 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:09:15.548330 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:09:15.550077 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:09:15.551714 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:09:15.559819 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:09:15.568193 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:09:15.575624 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 13 20:09:15.585278 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 20:09:15.585391 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:09:15.585402 kernel: GPT:9289727 != 19775487
Jan 13 20:09:15.585411 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:09:15.585420 kernel: GPT:9289727 != 19775487
Jan 13 20:09:15.585431 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:09:15.585441 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:09:15.576681 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:09:15.576792 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:09:15.578937 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:09:15.579845 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:09:15.580023 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:09:15.581081 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:09:15.595876 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:09:15.598819 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (509)
Jan 13 20:09:15.604209 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (515)
Jan 13 20:09:15.612190 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:09:15.614198 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:09:15.622052 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:09:15.626391 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:09:15.629971 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:09:15.630964 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:09:15.647759 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:09:15.649382 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:09:15.655063 disk-uuid[552]: Primary Header is updated.
Jan 13 20:09:15.655063 disk-uuid[552]: Secondary Entries is updated.
Jan 13 20:09:15.655063 disk-uuid[552]: Secondary Header is updated.
Jan 13 20:09:15.658621 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:09:15.672944 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:09:16.667403 disk-uuid[554]: The operation has completed successfully.
Jan 13 20:09:16.668300 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:09:16.684308 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:09:16.684404 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:09:16.710791 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:09:16.713585 sh[574]: Success
Jan 13 20:09:16.727328 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:09:16.762981 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:09:16.764798 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:09:16.766625 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:09:16.775042 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78
Jan 13 20:09:16.775076 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:09:16.775093 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:09:16.775824 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:09:16.776835 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:09:16.779755 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:09:16.781003 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:09:16.787791 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:09:16.789083 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:09:16.797715 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:09:16.797758 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:09:16.798607 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:09:16.800611 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:09:16.806773 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:09:16.808186 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:09:16.813554 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:09:16.820763 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:09:16.883251 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:09:16.891773 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:09:16.925118 systemd-networkd[763]: lo: Link UP
Jan 13 20:09:16.925126 systemd-networkd[763]: lo: Gained carrier
Jan 13 20:09:16.925894 systemd-networkd[763]: Enumeration completed
Jan 13 20:09:16.926080 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:09:16.926469 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:09:16.928220 ignition[665]: Ignition 2.20.0
Jan 13 20:09:16.926473 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:09:16.928226 ignition[665]: Stage: fetch-offline
Jan 13 20:09:16.927460 systemd[1]: Reached target network.target - Network.
Jan 13 20:09:16.928266 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:09:16.927610 systemd-networkd[763]: eth0: Link UP
Jan 13 20:09:16.928274 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:09:16.927615 systemd-networkd[763]: eth0: Gained carrier
Jan 13 20:09:16.928425 ignition[665]: parsed url from cmdline: ""
Jan 13 20:09:16.927623 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:09:16.928428 ignition[665]: no config URL provided
Jan 13 20:09:16.928440 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:09:16.939638 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:09:16.928447 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:09:16.928472 ignition[665]: op(1): [started] loading QEMU firmware config module
Jan 13 20:09:16.928478 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 20:09:16.933592 ignition[665]: op(1): [finished] loading QEMU firmware config module
Jan 13 20:09:16.978370 ignition[665]: parsing config with SHA512: d89c3d9d0f8720b1b3837af764e524b67ca08c497388f653c342abd655c2e690193c296228e8d25f00ee61179603de3eba950a6af11c1407cd5ab588a573096d
Jan 13 20:09:16.983709 unknown[665]: fetched base config from "system"
Jan 13 20:09:16.983720 unknown[665]: fetched user config from "qemu"
Jan 13 20:09:16.984162 ignition[665]: fetch-offline: fetch-offline passed
Jan 13 20:09:16.984250 ignition[665]: Ignition finished successfully
Jan 13 20:09:16.986663 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:09:16.987892 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 20:09:16.995759 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:09:17.006128 ignition[770]: Ignition 2.20.0
Jan 13 20:09:17.006138 ignition[770]: Stage: kargs
Jan 13 20:09:17.006297 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:09:17.006307 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:09:17.007279 ignition[770]: kargs: kargs passed
Jan 13 20:09:17.007326 ignition[770]: Ignition finished successfully
Jan 13 20:09:17.010658 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:09:17.018807 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:09:17.028994 ignition[779]: Ignition 2.20.0
Jan 13 20:09:17.029003 ignition[779]: Stage: disks
Jan 13 20:09:17.029176 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:09:17.029185 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:09:17.030085 ignition[779]: disks: disks passed
Jan 13 20:09:17.030131 ignition[779]: Ignition finished successfully
Jan 13 20:09:17.032635 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:09:17.033872 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:09:17.035138 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:09:17.036745 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:09:17.038455 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:09:17.039868 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:09:17.055845 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:09:17.067065 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:09:17.071349 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:09:17.074317 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:09:17.122627 kernel: EXT4-fs (vda9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none.
Jan 13 20:09:17.122661 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:09:17.123716 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:09:17.134667 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:09:17.136248 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:09:17.137238 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:09:17.137309 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:09:17.137357 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:09:17.143615 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (798)
Jan 13 20:09:17.143202 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:09:17.145242 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:09:17.149286 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:09:17.149306 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:09:17.149316 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:09:17.149326 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:09:17.150940 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:09:17.188149 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:09:17.192025 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:09:17.195341 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:09:17.198195 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:09:17.266369 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:09:17.284708 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:09:17.287229 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:09:17.291617 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:09:17.308039 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:09:17.309380 ignition[912]: INFO : Ignition 2.20.0
Jan 13 20:09:17.309380 ignition[912]: INFO : Stage: mount
Jan 13 20:09:17.309380 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:09:17.309380 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:09:17.312747 ignition[912]: INFO : mount: mount passed
Jan 13 20:09:17.312747 ignition[912]: INFO : Ignition finished successfully
Jan 13 20:09:17.311221 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:09:17.320673 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:09:17.774183 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:09:17.783797 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:09:17.790234 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (926)
Jan 13 20:09:17.790268 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:09:17.791389 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:09:17.791402 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:09:17.793609 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:09:17.795040 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:09:17.811424 ignition[943]: INFO : Ignition 2.20.0
Jan 13 20:09:17.811424 ignition[943]: INFO : Stage: files
Jan 13 20:09:17.812736 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:09:17.812736 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:09:17.812736 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:09:17.815565 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:09:17.815565 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:09:17.815565 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:09:17.815565 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:09:17.815565 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:09:17.815354 unknown[943]: wrote ssh authorized keys file for user: core
Jan 13 20:09:17.821847 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:09:17.821847 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 20:09:17.872910 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:09:18.065779 systemd-networkd[763]: eth0: Gained IPv6LL
Jan 13 20:09:18.359921 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:09:18.359921 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:09:18.363032 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 13 20:09:18.685255 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 20:09:18.742755 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:09:18.744171 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:09:18.744171 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:09:18.744171 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:09:18.744171 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:09:18.744171 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:09:18.744171 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:09:18.744171 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:09:18.744171 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:09:18.744171 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:09:18.744171 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:09:18.744171 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:09:18.744171 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:09:18.744171 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:09:18.744171 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 13 20:09:18.979117 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 20:09:19.242727 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:09:19.242727 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 20:09:19.245540 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:09:19.245540 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:09:19.245540 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 20:09:19.245540 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 13 20:09:19.245540 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:09:19.245540 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:09:19.245540 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 13 20:09:19.245540 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:09:19.265140 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:09:19.269526 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:09:19.269526 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:09:19.269526 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:09:19.269526 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:09:19.269526 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:09:19.269526 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:09:19.269526 ignition[943]: INFO : files: files passed
Jan 13 20:09:19.269526 ignition[943]: INFO : Ignition finished successfully
Jan 13 20:09:19.271026 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:09:19.281998 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:09:19.283946 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:09:19.285259 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:09:19.285338 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:09:19.291312 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 20:09:19.293488 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:09:19.293488 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:09:19.296305 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:09:19.295246 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:09:19.297752 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:09:19.312764 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:09:19.332844 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:09:19.332966 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:09:19.334798 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:09:19.336165 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:09:19.337800 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:09:19.338529 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:09:19.354085 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:09:19.363784 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:09:19.371560 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:09:19.372503 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:09:19.374111 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:09:19.375395 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:09:19.375512 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:09:19.377551 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:09:19.379100 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:09:19.380272 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:09:19.381508 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:09:19.382942 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:09:19.384454 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:09:19.385872 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:09:19.387355 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:09:19.388868 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:09:19.390212 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:09:19.391315 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:09:19.391441 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:09:19.393192 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:09:19.394699 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:09:19.396275 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:09:19.399654 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:09:19.400620 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:09:19.400751 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:09:19.402932 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:09:19.403040 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:09:19.404564 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:09:19.405815 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:09:19.411662 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:09:19.412693 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:09:19.414379 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:09:19.415673 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:09:19.415768 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:09:19.416940 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:09:19.417014 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:09:19.418223 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:09:19.418330 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:09:19.419726 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:09:19.419821 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:09:19.431790 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:09:19.432472 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:09:19.432617 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:09:19.435258 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:09:19.436187 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:09:19.436302 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:09:19.437734 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:09:19.437926 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:09:19.444748 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 13 20:09:19.446356 ignition[999]: INFO : Ignition 2.20.0 Jan 13 20:09:19.446356 ignition[999]: INFO : Stage: umount Jan 13 20:09:19.446356 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:09:19.446356 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:09:19.450474 ignition[999]: INFO : umount: umount passed Jan 13 20:09:19.450474 ignition[999]: INFO : Ignition finished successfully Jan 13 20:09:19.446642 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:09:19.449037 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:09:19.449531 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:09:19.449680 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:09:19.452994 systemd[1]: Stopped target network.target - Network. Jan 13 20:09:19.454181 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:09:19.454244 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:09:19.455515 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:09:19.455559 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:09:19.457164 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:09:19.457213 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:09:19.458395 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:09:19.458436 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:09:19.459862 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:09:19.461082 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:09:19.468662 systemd-networkd[763]: eth0: DHCPv6 lease lost Jan 13 20:09:19.470112 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:09:19.470249 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:09:19.472905 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:09:19.473095 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:09:19.475033 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:09:19.475104 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:09:19.485702 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:09:19.486404 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:09:19.486459 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:09:19.488188 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:09:19.488233 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:09:19.489507 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:09:19.489549 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:09:19.491207 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:09:19.491246 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:09:19.492707 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:09:19.502397 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:09:19.502552 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jan 13 20:09:19.505301 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:09:19.506106 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:09:19.507645 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:09:19.507696 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:09:19.509047 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:09:19.509089 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:09:19.510428 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:09:19.510473 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:09:19.512542 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:09:19.512587 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:09:19.514660 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:09:19.514699 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:09:19.521767 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:09:19.522550 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:09:19.522612 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:09:19.524267 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:09:19.524305 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:09:19.525962 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:09:19.526038 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:09:19.527346 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:09:19.527424 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:09:19.529318 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:09:19.530239 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:09:19.530299 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:09:19.532370 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:09:19.541256 systemd[1]: Switching root. Jan 13 20:09:19.571493 systemd-journald[238]: Journal stopped Jan 13 20:09:20.230575 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jan 13 20:09:20.230657 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:09:20.230669 kernel: SELinux: policy capability open_perms=1 Jan 13 20:09:20.230679 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:09:20.230691 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:09:20.230702 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:09:20.230721 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:09:20.230731 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:09:20.230741 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:09:20.230751 kernel: audit: type=1403 audit(1736798959.723:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:09:20.230762 systemd[1]: Successfully loaded SELinux policy in 30.102ms. Jan 13 20:09:20.230782 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.361ms. 
Jan 13 20:09:20.230794 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:09:20.230807 systemd[1]: Detected virtualization kvm. Jan 13 20:09:20.230817 systemd[1]: Detected architecture arm64. Jan 13 20:09:20.230828 systemd[1]: Detected first boot. Jan 13 20:09:20.230838 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:09:20.230849 zram_generator::config[1043]: No configuration found. Jan 13 20:09:20.230860 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:09:20.230870 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:09:20.230881 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:09:20.230893 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:09:20.230904 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:09:20.230914 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:09:20.230926 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:09:20.230936 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:09:20.230947 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:09:20.230961 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:09:20.230972 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:09:20.230982 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:09:20.231002 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:09:20.231015 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:09:20.231026 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:09:20.231036 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:09:20.231051 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:09:20.231062 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:09:20.231072 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 20:09:20.231083 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:09:20.231093 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:09:20.231106 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:09:20.231117 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:09:20.231128 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:09:20.231138 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:09:20.231149 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:09:20.231160 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:09:20.231171 systemd[1]: Reached target swap.target - Swaps. 
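The virtualization and architecture detection above can be reproduced from a shell on the booted host; a quick check, assuming systemd's standard tooling is present:

    systemd-detect-virt    # prints "kvm" for this guest
    cat /etc/machine-id    # the ID that was initialized from the VM UUID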
Jan 13 20:09:20.231181 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:09:20.231193 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:09:20.231205 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:09:20.231215 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:09:20.231227 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:09:20.231237 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:09:20.231251 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:09:20.231298 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:09:20.231312 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:09:20.231327 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:09:20.231340 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:09:20.231350 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:09:20.231362 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:09:20.231373 systemd[1]: Reached target machines.target - Containers. Jan 13 20:09:20.231384 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:09:20.231412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:09:20.231427 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:09:20.231438 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:09:20.231451 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:09:20.231461 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:09:20.231472 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:09:20.231483 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:09:20.231493 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:09:20.231506 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:09:20.231516 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:09:20.231528 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:09:20.231540 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:09:20.231551 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:09:20.231561 kernel: loop: module loaded Jan 13 20:09:20.231570 kernel: fuse: init (API version 7.39) Jan 13 20:09:20.231580 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:09:20.231590 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:09:20.231609 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:09:20.231619 kernel: ACPI: bus type drm_connector registered Jan 13 20:09:20.231629 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
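Each modprobe@<name>.service above is an instance of a single template unit that loads the kernel module named by its instance string. The stock template looks roughly like this (a sketch, abridged from systemd's shipped modprobe@.service):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/sbin/modprobe -abq %I

The leading '-' on ExecStart makes a missing module non-fatal, which is why every instance can report "Deactivated successfully" even on kernels without the module built.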
Jan 13 20:09:20.231641 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:09:20.231652 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:09:20.231662 systemd[1]: Stopped verity-setup.service. Jan 13 20:09:20.231692 systemd-journald[1110]: Collecting audit messages is disabled. Jan 13 20:09:20.231720 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:09:20.231731 systemd-journald[1110]: Journal started Jan 13 20:09:20.231752 systemd-journald[1110]: Runtime Journal (/run/log/journal/5efeff867de048ea94024241899c462b) is 5.9M, max 47.3M, 41.4M free. Jan 13 20:09:20.068237 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:09:20.081193 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:09:20.081527 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:09:20.233668 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:09:20.234359 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:09:20.235320 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:09:20.236160 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:09:20.237096 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:09:20.237984 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:09:20.238919 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:09:20.240029 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:09:20.241294 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:09:20.241419 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:09:20.242549 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:09:20.242705 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:09:20.243742 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:09:20.243866 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:09:20.244874 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:09:20.245000 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:09:20.246110 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:09:20.246227 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:09:20.247336 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:09:20.247462 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:09:20.248748 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:09:20.249815 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:09:20.251202 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:09:20.260758 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:09:20.268724 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:09:20.272793 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:09:20.273609 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Jan 13 20:09:20.273642 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:09:20.275257 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:09:20.277319 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:09:20.280975 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:09:20.282139 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:09:20.283721 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:09:20.285533 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:09:20.286432 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:09:20.288790 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:09:20.289730 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:09:20.293770 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:09:20.295419 systemd-journald[1110]: Time spent on flushing to /var/log/journal/5efeff867de048ea94024241899c462b is 20.247ms for 857 entries. Jan 13 20:09:20.295419 systemd-journald[1110]: System Journal (/var/log/journal/5efeff867de048ea94024241899c462b) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:09:20.327959 systemd-journald[1110]: Received client request to flush runtime journal. Jan 13 20:09:20.298179 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:09:20.300964 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:09:20.303549 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:09:20.304856 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:09:20.331689 kernel: loop0: detected capacity change from 0 to 194512 Jan 13 20:09:20.305819 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:09:20.306917 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:09:20.308194 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:09:20.317297 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:09:20.322456 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:09:20.324772 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:09:20.332076 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:09:20.339175 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:09:20.351131 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:09:20.352753 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:09:20.353008 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:09:20.355674 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
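The Runtime Journal and System Journal size lines above reflect journald's defaults, which are capped relative to the size of the backing filesystem; they can be pinned explicitly instead. A sketch of the relevant /etc/systemd/journald.conf knobs, with illustrative values:

    [Journal]
    Storage=persistent
    RuntimeMaxUse=48M
    SystemMaxUse=196M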
Jan 13 20:09:20.359676 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:09:20.371816 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:09:20.390631 kernel: loop1: detected capacity change from 0 to 116808 Jan 13 20:09:20.397068 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jan 13 20:09:20.397086 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jan 13 20:09:20.403719 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:09:20.424636 kernel: loop2: detected capacity change from 0 to 113536 Jan 13 20:09:20.464941 kernel: loop3: detected capacity change from 0 to 194512 Jan 13 20:09:20.474652 kernel: loop4: detected capacity change from 0 to 116808 Jan 13 20:09:20.479645 kernel: loop5: detected capacity change from 0 to 113536 Jan 13 20:09:20.484567 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 20:09:20.484983 (sd-merge)[1182]: Merged extensions into '/usr'. Jan 13 20:09:20.488412 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:09:20.488427 systemd[1]: Reloading... Jan 13 20:09:20.555625 zram_generator::config[1208]: No configuration found. Jan 13 20:09:20.611074 ldconfig[1149]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:09:20.655898 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:09:20.691232 systemd[1]: Reloading finished in 202 ms. Jan 13 20:09:20.722749 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:09:20.723953 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:09:20.746816 systemd[1]: Starting ensure-sysext.service... Jan 13 20:09:20.748717 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:09:20.755526 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:09:20.755540 systemd[1]: Reloading... Jan 13 20:09:20.769164 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:09:20.769778 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:09:20.770749 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:09:20.771183 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jan 13 20:09:20.771404 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jan 13 20:09:20.774248 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:09:20.774446 systemd-tmpfiles[1243]: Skipping /boot Jan 13 20:09:20.784405 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:09:20.784524 systemd-tmpfiles[1243]: Skipping /boot Jan 13 20:09:20.801631 zram_generator::config[1270]: No configuration found. 
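The (sd-merge) lines record systemd-sysext overlaying the extension images found under /etc/extensions (here, symlinks into /opt/extensions written by Ignition earlier) onto /usr. After boot, the merge can be listed or redone with the same tool:

    systemd-sysext status     # show hierarchies and which extensions are merged
    systemd-sysext refresh    # unmerge, rescan images, and merge again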
Jan 13 20:09:20.880818 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:09:20.916136 systemd[1]: Reloading finished in 160 ms. Jan 13 20:09:20.931432 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:09:20.947037 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:09:20.954321 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:09:20.956749 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:09:20.958690 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:09:20.963900 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:09:20.966429 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:09:20.968928 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:09:20.971852 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:09:20.979845 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:09:20.981622 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:09:20.984007 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:09:20.984892 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:09:20.985612 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:09:20.987638 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:09:20.990810 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:09:20.993126 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:09:20.993247 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:09:20.996092 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:09:20.996241 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:09:21.000146 systemd-udevd[1314]: Using default interface naming scheme 'v255'. Jan 13 20:09:21.006818 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:09:21.017981 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:09:21.020546 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:09:21.022620 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:09:21.023580 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:09:21.024937 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:09:21.030909 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:09:21.033170 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:09:21.035524 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
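The systemd-tmpfiles "Duplicate line" warnings above are benign: two tmpfiles.d fragments declare the same path, and the first line read wins while the duplicate is ignored. The format being parsed is the usual one-entry-per-line table, e.g. (sketch, entries illustrative):

    # Type  Path              Mode  User  Group            Age  Argument
    d       /root             0700  root  root             -    -
    d       /var/log/journal  2755  root  systemd-journal  -    -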
Jan 13 20:09:21.036049 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:09:21.038221 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:09:21.038351 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:09:21.042660 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:09:21.044152 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:09:21.049639 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:09:21.049800 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:09:21.062121 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:09:21.071111 systemd[1]: Finished ensure-sysext.service. Jan 13 20:09:21.073756 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 13 20:09:21.078801 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:09:21.086732 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1350) Jan 13 20:09:21.086897 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:09:21.094820 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:09:21.096163 augenrules[1375]: No rules Jan 13 20:09:21.101768 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:09:21.102792 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:09:21.105742 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:09:21.114754 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:09:21.115583 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:09:21.116087 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:09:21.116270 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:09:21.117637 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:09:21.117873 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:09:21.119372 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:09:21.119639 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:09:21.121026 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:09:21.121446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:09:21.131657 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:09:21.137506 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:09:21.147817 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:09:21.149079 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
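The \x2d sequences in unit names such as dev-disk-by\x2dlabel-OEM.device are systemd's path escaping: '/' becomes '-', so literal dashes in the path must be hex-escaped. The mapping can be reproduced with systemd-escape:

    $ systemd-escape -p /dev/disk/by-label/OEM
    dev-disk-by\x2dlabel-OEM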
Jan 13 20:09:21.149164 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:09:21.168187 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:09:21.203751 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:09:21.208390 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:09:21.210376 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:09:21.215809 systemd-resolved[1309]: Positive Trust Anchors: Jan 13 20:09:21.216966 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:09:21.219208 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:09:21.219243 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:09:21.219486 systemd-networkd[1383]: lo: Link UP Jan 13 20:09:21.219493 systemd-networkd[1383]: lo: Gained carrier Jan 13 20:09:21.220396 systemd-networkd[1383]: Enumeration completed Jan 13 20:09:21.221253 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:09:21.221319 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:09:21.222059 systemd-networkd[1383]: eth0: Link UP Jan 13 20:09:21.222126 systemd-networkd[1383]: eth0: Gained carrier Jan 13 20:09:21.222195 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:09:21.228509 systemd-resolved[1309]: Defaulting to hostname 'linux'. Jan 13 20:09:21.234197 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:09:21.235465 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:09:21.236687 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:09:21.237932 systemd[1]: Reached target network.target - Network. Jan 13 20:09:21.238994 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:09:21.241317 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:09:21.243775 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.49/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:09:21.245087 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Jan 13 20:09:21.249318 systemd-timesyncd[1384]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 20:09:21.249369 systemd-timesyncd[1384]: Initial clock synchronization to Mon 2025-01-13 20:09:20.853874 UTC. Jan 13 20:09:21.252803 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
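The catch-all unit /usr/lib/systemd/network/zz-default.network referenced above is what puts any otherwise-unconfigured interface on DHCP, hence networkd's note about a potentially unpredictable interface name. Its effective content is roughly the following (a sketch; the shipped file may carry additional [DHCP] options):

    [Match]
    Name=*

    [Network]
    DHCP=yes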
Jan 13 20:09:21.258462 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:09:21.299159 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:09:21.300327 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:09:21.301179 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:09:21.302099 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:09:21.303060 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:09:21.304148 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:09:21.305035 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:09:21.306044 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:09:21.306935 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:09:21.306970 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:09:21.307622 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:09:21.309610 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:09:21.311841 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:09:21.319397 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:09:21.321507 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:09:21.323140 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:09:21.324069 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:09:21.324852 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:09:21.325777 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:09:21.325803 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:09:21.326636 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:09:21.328390 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:09:21.330701 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:09:21.330724 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:09:21.332809 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:09:21.334821 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:09:21.336524 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:09:21.339058 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:09:21.346573 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:09:21.352809 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:09:21.358684 jq[1418]: false Jan 13 20:09:21.359525 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 13 20:09:21.362440 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:09:21.363675 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:09:21.364780 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:09:21.367725 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:09:21.370115 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:09:21.372193 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:09:21.372346 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:09:21.372614 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:09:21.372773 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:09:21.373696 extend-filesystems[1419]: Found loop3 Jan 13 20:09:21.375202 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:09:21.375512 extend-filesystems[1419]: Found loop4 Jan 13 20:09:21.376793 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:09:21.381727 extend-filesystems[1419]: Found loop5 Jan 13 20:09:21.381727 extend-filesystems[1419]: Found vda Jan 13 20:09:21.381727 extend-filesystems[1419]: Found vda1 Jan 13 20:09:21.381727 extend-filesystems[1419]: Found vda2 Jan 13 20:09:21.381727 extend-filesystems[1419]: Found vda3 Jan 13 20:09:21.381727 extend-filesystems[1419]: Found usr Jan 13 20:09:21.381727 extend-filesystems[1419]: Found vda4 Jan 13 20:09:21.381727 extend-filesystems[1419]: Found vda6 Jan 13 20:09:21.381727 extend-filesystems[1419]: Found vda7 Jan 13 20:09:21.381727 extend-filesystems[1419]: Found vda9 Jan 13 20:09:21.381727 extend-filesystems[1419]: Checking size of /dev/vda9 Jan 13 20:09:21.390781 dbus-daemon[1417]: [system] SELinux support is enabled Jan 13 20:09:21.399559 jq[1434]: true Jan 13 20:09:21.391247 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:09:21.395681 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:09:21.395721 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:09:21.399897 jq[1444]: true Jan 13 20:09:21.395861 (ntainerd)[1445]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:09:21.397239 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:09:21.397263 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:09:21.414545 update_engine[1433]: I20250113 20:09:21.413874 1433 main.cc:92] Flatcar Update Engine starting Jan 13 20:09:21.418037 systemd[1]: Started update-engine.service - Update Engine. 
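The extend-filesystems walk above feeds the online resize that completes below: once the partition table offers more space, growing the mounted ext4 root is a single call. The equivalent manual step, assuming the same /dev/vda9 root seen in the log:

    resize2fs /dev/vda9    # online-grow ext4 to fill the enlarged partition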
Jan 13 20:09:21.418354 update_engine[1433]: I20250113 20:09:21.418129 1433 update_check_scheduler.cc:74] Next update check in 3m18s Jan 13 20:09:21.423478 extend-filesystems[1419]: Resized partition /dev/vda9 Jan 13 20:09:21.430828 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:09:21.440624 tar[1437]: linux-arm64/helm Jan 13 20:09:21.448615 extend-filesystems[1457]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:09:21.450726 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1350) Jan 13 20:09:21.452214 systemd-logind[1427]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:09:21.452622 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 20:09:21.453485 systemd-logind[1427]: New seat seat0. Jan 13 20:09:21.458905 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:09:21.492686 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 20:09:21.498913 locksmithd[1454]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:09:21.504249 extend-filesystems[1457]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 20:09:21.504249 extend-filesystems[1457]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:09:21.504249 extend-filesystems[1457]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 20:09:21.507346 extend-filesystems[1419]: Resized filesystem in /dev/vda9 Jan 13 20:09:21.505189 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:09:21.506568 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:09:21.509058 bash[1470]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:09:21.511885 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:09:21.514740 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 20:09:21.624892 containerd[1445]: time="2025-01-13T20:09:21.624742800Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:09:21.653455 containerd[1445]: time="2025-01-13T20:09:21.653037160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:09:21.654656 containerd[1445]: time="2025-01-13T20:09:21.654620840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:09:21.655764 containerd[1445]: time="2025-01-13T20:09:21.654746320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:09:21.655764 containerd[1445]: time="2025-01-13T20:09:21.654770040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:09:21.655764 containerd[1445]: time="2025-01-13T20:09:21.654910560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:09:21.655764 containerd[1445]: time="2025-01-13T20:09:21.654927160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:09:21.655764 containerd[1445]: time="2025-01-13T20:09:21.654977960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:09:21.655764 containerd[1445]: time="2025-01-13T20:09:21.654990520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:09:21.655764 containerd[1445]: time="2025-01-13T20:09:21.655140840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:09:21.655764 containerd[1445]: time="2025-01-13T20:09:21.655154240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:09:21.655764 containerd[1445]: time="2025-01-13T20:09:21.655166560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:09:21.655764 containerd[1445]: time="2025-01-13T20:09:21.655175280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:09:21.655764 containerd[1445]: time="2025-01-13T20:09:21.655240000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:09:21.655764 containerd[1445]: time="2025-01-13T20:09:21.655417360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:09:21.656022 containerd[1445]: time="2025-01-13T20:09:21.655504520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:09:21.656022 containerd[1445]: time="2025-01-13T20:09:21.655517880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:09:21.656022 containerd[1445]: time="2025-01-13T20:09:21.655581840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:09:21.656022 containerd[1445]: time="2025-01-13T20:09:21.655640200Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:09:21.659403 containerd[1445]: time="2025-01-13T20:09:21.659379320Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:09:21.659544 containerd[1445]: time="2025-01-13T20:09:21.659528480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:09:21.659615 containerd[1445]: time="2025-01-13T20:09:21.659590520Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:09:21.659669 containerd[1445]: time="2025-01-13T20:09:21.659657440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:09:21.659728 containerd[1445]: time="2025-01-13T20:09:21.659715800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 13 20:09:21.659905 containerd[1445]: time="2025-01-13T20:09:21.659887720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:09:21.660197 containerd[1445]: time="2025-01-13T20:09:21.660178360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:09:21.660367 containerd[1445]: time="2025-01-13T20:09:21.660349800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:09:21.660429 containerd[1445]: time="2025-01-13T20:09:21.660417160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:09:21.660487 containerd[1445]: time="2025-01-13T20:09:21.660476400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:09:21.660541 containerd[1445]: time="2025-01-13T20:09:21.660529600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:09:21.660592 containerd[1445]: time="2025-01-13T20:09:21.660580960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:09:21.660661 containerd[1445]: time="2025-01-13T20:09:21.660648560Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:09:21.660740 containerd[1445]: time="2025-01-13T20:09:21.660725680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:09:21.660794 containerd[1445]: time="2025-01-13T20:09:21.660782960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:09:21.660844 containerd[1445]: time="2025-01-13T20:09:21.660833320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:09:21.660894 containerd[1445]: time="2025-01-13T20:09:21.660883840Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:09:21.660964 containerd[1445]: time="2025-01-13T20:09:21.660952480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:09:21.661025 containerd[1445]: time="2025-01-13T20:09:21.661013800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:09:21.661080 containerd[1445]: time="2025-01-13T20:09:21.661068760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:09:21.661140 containerd[1445]: time="2025-01-13T20:09:21.661127560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:09:21.661195 containerd[1445]: time="2025-01-13T20:09:21.661184280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:09:21.661243 containerd[1445]: time="2025-01-13T20:09:21.661232680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:09:21.661298 containerd[1445]: time="2025-01-13T20:09:21.661286920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 13 20:09:21.661348 containerd[1445]: time="2025-01-13T20:09:21.661337560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:09:21.661398 containerd[1445]: time="2025-01-13T20:09:21.661387880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:09:21.661449 containerd[1445]: time="2025-01-13T20:09:21.661438120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:09:21.661501 containerd[1445]: time="2025-01-13T20:09:21.661490600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:09:21.661550 containerd[1445]: time="2025-01-13T20:09:21.661540120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:09:21.661627 containerd[1445]: time="2025-01-13T20:09:21.661613200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:09:21.661698 containerd[1445]: time="2025-01-13T20:09:21.661684960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:09:21.661763 containerd[1445]: time="2025-01-13T20:09:21.661750760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:09:21.661825 containerd[1445]: time="2025-01-13T20:09:21.661813440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:09:21.661890 containerd[1445]: time="2025-01-13T20:09:21.661865880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:09:21.661938 containerd[1445]: time="2025-01-13T20:09:21.661927440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:09:21.662168 containerd[1445]: time="2025-01-13T20:09:21.662154680Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:09:21.662241 containerd[1445]: time="2025-01-13T20:09:21.662226000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:09:21.662290 containerd[1445]: time="2025-01-13T20:09:21.662279640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:09:21.662338 containerd[1445]: time="2025-01-13T20:09:21.662326560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:09:21.662383 containerd[1445]: time="2025-01-13T20:09:21.662372520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:09:21.662431 containerd[1445]: time="2025-01-13T20:09:21.662420840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:09:21.662475 containerd[1445]: time="2025-01-13T20:09:21.662466080Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:09:21.663618 containerd[1445]: time="2025-01-13T20:09:21.662512960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:09:21.663694 containerd[1445]: time="2025-01-13T20:09:21.662802240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:09:21.663694 containerd[1445]: time="2025-01-13T20:09:21.662849480Z" level=info msg="Connect containerd service" Jan 13 20:09:21.663694 containerd[1445]: time="2025-01-13T20:09:21.662883400Z" level=info msg="using legacy CRI server" Jan 13 20:09:21.663694 containerd[1445]: time="2025-01-13T20:09:21.662890120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:09:21.663694 containerd[1445]: time="2025-01-13T20:09:21.663140240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:09:21.664062 containerd[1445]: time="2025-01-13T20:09:21.664037440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:09:21.664415 
containerd[1445]: time="2025-01-13T20:09:21.664332800Z" level=info msg="Start subscribing containerd event" Jan 13 20:09:21.664415 containerd[1445]: time="2025-01-13T20:09:21.664388560Z" level=info msg="Start recovering state" Jan 13 20:09:21.664466 containerd[1445]: time="2025-01-13T20:09:21.664449440Z" level=info msg="Start event monitor" Jan 13 20:09:21.664466 containerd[1445]: time="2025-01-13T20:09:21.664460280Z" level=info msg="Start snapshots syncer" Jan 13 20:09:21.664500 containerd[1445]: time="2025-01-13T20:09:21.664469480Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:09:21.664500 containerd[1445]: time="2025-01-13T20:09:21.664476840Z" level=info msg="Start streaming server" Jan 13 20:09:21.664810 containerd[1445]: time="2025-01-13T20:09:21.664791280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:09:21.664904 containerd[1445]: time="2025-01-13T20:09:21.664891120Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:09:21.664997 containerd[1445]: time="2025-01-13T20:09:21.664986040Z" level=info msg="containerd successfully booted in 0.041523s" Jan 13 20:09:21.665177 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:09:21.811014 tar[1437]: linux-arm64/LICENSE Jan 13 20:09:21.811112 tar[1437]: linux-arm64/README.md Jan 13 20:09:21.823836 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:09:22.199354 sshd_keygen[1436]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:09:22.217057 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:09:22.229103 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:09:22.233942 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:09:22.234161 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:09:22.237079 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:09:22.248384 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:09:22.251151 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:09:22.252977 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 20:09:22.254044 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:09:22.993695 systemd-networkd[1383]: eth0: Gained IPv6LL Jan 13 20:09:22.996194 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:09:22.997777 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:09:23.010905 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:09:23.012826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:09:23.014475 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:09:23.027290 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:09:23.027479 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:09:23.028797 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:09:23.030825 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:09:23.463176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:09:23.464572 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 13 20:09:23.467827 (kubelet)[1529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:09:23.468668 systemd[1]: Startup finished in 544ms (kernel) + 5.011s (initrd) + 3.782s (userspace) = 9.339s. Jan 13 20:09:23.914273 kubelet[1529]: E0113 20:09:23.912395 1529 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:09:23.916924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:09:23.917053 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:09:26.840138 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:09:26.841364 systemd[1]: Started sshd@0-10.0.0.49:22-10.0.0.1:58048.service - OpenSSH per-connection server daemon (10.0.0.1:58048). Jan 13 20:09:26.896791 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 58048 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:09:26.900142 sshd-session[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:26.910022 systemd-logind[1427]: New session 1 of user core. Jan 13 20:09:26.911030 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:09:26.921846 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:09:26.932706 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:09:26.934848 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:09:26.940763 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:09:27.010183 systemd[1547]: Queued start job for default target default.target. Jan 13 20:09:27.025489 systemd[1547]: Created slice app.slice - User Application Slice. Jan 13 20:09:27.025532 systemd[1547]: Reached target paths.target - Paths. Jan 13 20:09:27.025544 systemd[1547]: Reached target timers.target - Timers. Jan 13 20:09:27.026765 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:09:27.036185 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:09:27.036240 systemd[1547]: Reached target sockets.target - Sockets. Jan 13 20:09:27.036251 systemd[1547]: Reached target basic.target - Basic System. Jan 13 20:09:27.036296 systemd[1547]: Reached target default.target - Main User Target. Jan 13 20:09:27.036322 systemd[1547]: Startup finished in 90ms. Jan 13 20:09:27.036427 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:09:27.037587 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:09:27.093950 systemd[1]: Started sshd@1-10.0.0.49:22-10.0.0.1:58056.service - OpenSSH per-connection server daemon (10.0.0.1:58056). Jan 13 20:09:27.131404 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 58056 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:09:27.132653 sshd-session[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:27.136682 systemd-logind[1427]: New session 2 of user core. Jan 13 20:09:27.149764 systemd[1]: Started session-2.scope - Session 2 of User core. 
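The kubelet exit above has a single cause: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is only written by kubeadm init or kubeadm join, so this failure, and the restarts that follow later in the log, are normal until bootstrap runs. A sketch of the situation; the embedded YAML is a minimal illustrative KubeletConfiguration, not the file this node ends up with (the cgroupDriver value matches the CgroupDriver "systemd" visible in the node config dump later in this log):

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the error above
    MINIMAL_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    """

    if not KUBELET_CONFIG.exists():
        print(f"{KUBELET_CONFIG} missing -> kubelet exits status=1, as logged")
        print("kubeadm init/join writes the real file; minimal shape:")
        print(MINIMAL_CONFIG)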
Jan 13 20:09:27.199662 sshd[1560]: Connection closed by 10.0.0.1 port 58056 Jan 13 20:09:27.199573 sshd-session[1558]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:27.208797 systemd[1]: sshd@1-10.0.0.49:22-10.0.0.1:58056.service: Deactivated successfully. Jan 13 20:09:27.210137 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:09:27.212576 systemd-logind[1427]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:09:27.213653 systemd[1]: Started sshd@2-10.0.0.49:22-10.0.0.1:58062.service - OpenSSH per-connection server daemon (10.0.0.1:58062). Jan 13 20:09:27.214361 systemd-logind[1427]: Removed session 2. Jan 13 20:09:27.251249 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 58062 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:09:27.252425 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:27.255827 systemd-logind[1427]: New session 3 of user core. Jan 13 20:09:27.271727 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:09:27.317903 sshd[1567]: Connection closed by 10.0.0.1 port 58062 Jan 13 20:09:27.318526 sshd-session[1565]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:27.331237 systemd[1]: sshd@2-10.0.0.49:22-10.0.0.1:58062.service: Deactivated successfully. Jan 13 20:09:27.332768 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:09:27.335737 systemd-logind[1427]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:09:27.337047 systemd[1]: Started sshd@3-10.0.0.49:22-10.0.0.1:58078.service - OpenSSH per-connection server daemon (10.0.0.1:58078). Jan 13 20:09:27.337651 systemd-logind[1427]: Removed session 3. Jan 13 20:09:27.374669 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 58078 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:09:27.376077 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:27.379365 systemd-logind[1427]: New session 4 of user core. Jan 13 20:09:27.389800 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:09:27.439220 sshd[1574]: Connection closed by 10.0.0.1 port 58078 Jan 13 20:09:27.439627 sshd-session[1572]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:27.448818 systemd[1]: sshd@3-10.0.0.49:22-10.0.0.1:58078.service: Deactivated successfully. Jan 13 20:09:27.450121 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:09:27.451260 systemd-logind[1427]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:09:27.460939 systemd[1]: Started sshd@4-10.0.0.49:22-10.0.0.1:58094.service - OpenSSH per-connection server daemon (10.0.0.1:58094). Jan 13 20:09:27.461674 systemd-logind[1427]: Removed session 4. Jan 13 20:09:27.493525 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 58094 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:09:27.494521 sshd-session[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:27.497833 systemd-logind[1427]: New session 5 of user core. Jan 13 20:09:27.512713 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 13 20:09:27.576985 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:09:27.577265 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:09:27.591315 sudo[1582]: pam_unix(sudo:session): session closed for user root Jan 13 20:09:27.592631 sshd[1581]: Connection closed by 10.0.0.1 port 58094 Jan 13 20:09:27.593006 sshd-session[1579]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:27.605975 systemd[1]: sshd@4-10.0.0.49:22-10.0.0.1:58094.service: Deactivated successfully. Jan 13 20:09:27.607330 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:09:27.608560 systemd-logind[1427]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:09:27.609800 systemd[1]: Started sshd@5-10.0.0.49:22-10.0.0.1:58106.service - OpenSSH per-connection server daemon (10.0.0.1:58106). Jan 13 20:09:27.610508 systemd-logind[1427]: Removed session 5. Jan 13 20:09:27.647839 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 58106 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:09:27.648681 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:27.652431 systemd-logind[1427]: New session 6 of user core. Jan 13 20:09:27.667730 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:09:27.717734 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:09:27.718016 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:09:27.720844 sudo[1591]: pam_unix(sudo:session): session closed for user root Jan 13 20:09:27.725160 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:09:27.725660 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:09:27.742880 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:09:27.764581 augenrules[1613]: No rules Jan 13 20:09:27.765836 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:09:27.767633 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:09:27.768937 sudo[1590]: pam_unix(sudo:session): session closed for user root Jan 13 20:09:27.770087 sshd[1589]: Connection closed by 10.0.0.1 port 58106 Jan 13 20:09:27.770391 sshd-session[1587]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:27.776836 systemd[1]: sshd@5-10.0.0.49:22-10.0.0.1:58106.service: Deactivated successfully. Jan 13 20:09:27.778120 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:09:27.779304 systemd-logind[1427]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:09:27.780343 systemd[1]: Started sshd@6-10.0.0.49:22-10.0.0.1:58122.service - OpenSSH per-connection server daemon (10.0.0.1:58122). Jan 13 20:09:27.781023 systemd-logind[1427]: Removed session 6. Jan 13 20:09:27.817033 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 58122 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:09:27.818122 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:27.821954 systemd-logind[1427]: New session 7 of user core. Jan 13 20:09:27.837730 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 13 20:09:27.886923 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:09:27.887203 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:09:28.213836 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:09:28.213922 (dockerd)[1644]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:09:28.460671 dockerd[1644]: time="2025-01-13T20:09:28.460618409Z" level=info msg="Starting up" Jan 13 20:09:28.622089 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4180188759-merged.mount: Deactivated successfully. Jan 13 20:09:28.655112 dockerd[1644]: time="2025-01-13T20:09:28.655069451Z" level=info msg="Loading containers: start." Jan 13 20:09:28.790615 kernel: Initializing XFRM netlink socket Jan 13 20:09:28.865733 systemd-networkd[1383]: docker0: Link UP Jan 13 20:09:28.899944 dockerd[1644]: time="2025-01-13T20:09:28.899773744Z" level=info msg="Loading containers: done." Jan 13 20:09:28.914897 dockerd[1644]: time="2025-01-13T20:09:28.914826314Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:09:28.915045 dockerd[1644]: time="2025-01-13T20:09:28.914939506Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:09:28.915068 dockerd[1644]: time="2025-01-13T20:09:28.915043625Z" level=info msg="Daemon has completed initialization" Jan 13 20:09:28.947467 dockerd[1644]: time="2025-01-13T20:09:28.947412810Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:09:28.947661 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:09:29.597124 containerd[1445]: time="2025-01-13T20:09:29.597075864Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:09:29.620068 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1096566078-merged.mount: Deactivated successfully. Jan 13 20:09:30.302189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3099663098.mount: Deactivated successfully. 
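Docker's overlay2 warning above ("Not using native diff for overlay2 ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") is informational: with redirect_dir support built into the kernel, overlayfs may encode renamed directories as redirects that the native diff driver cannot interpret, so Docker falls back to a slower file-by-file comparison when building images. An illustrative way to confirm the kernel option (assumes the kernel exposes /proc/config.gz, which requires CONFIG_IKCONFIG_PROC):

    import gzip, platform

    def kernel_has(option: str) -> bool:
        # Reads the running kernel's build config, when exposed.
        try:
            with gzip.open("/proc/config.gz", "rt") as f:
                return any(line.strip() == f"{option}=y" for line in f)
        except FileNotFoundError:
            return False

    print(platform.release(), kernel_has("CONFIG_OVERLAY_FS_REDIRECT_DIR"))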
Jan 13 20:09:32.155252 containerd[1445]: time="2025-01-13T20:09:32.154864752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:32.155252 containerd[1445]: time="2025-01-13T20:09:32.155193186Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201252" Jan 13 20:09:32.156269 containerd[1445]: time="2025-01-13T20:09:32.156236720Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:32.159444 containerd[1445]: time="2025-01-13T20:09:32.159400486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:32.160411 containerd[1445]: time="2025-01-13T20:09:32.160385471Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.56326669s" Jan 13 20:09:32.160470 containerd[1445]: time="2025-01-13T20:09:32.160413265Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Jan 13 20:09:32.178787 containerd[1445]: time="2025-01-13T20:09:32.178754506Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:09:34.167352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:09:34.177755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:09:34.262430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:09:34.265913 (kubelet)[1914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:09:34.301605 kubelet[1914]: E0113 20:09:34.301548 1914 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:09:34.305012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:09:34.305158 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
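The kube-apiserver pull record above carries enough data for a quick throughput estimate: the image is reported as 32198050 bytes fetched in 2.56326669s. Worked out, with both values copied from the log:

    # Effective pull throughput for registry.k8s.io/kube-apiserver:v1.29.12,
    # straight from the "Pulled image ... size ... in ..." line above.
    size_bytes = 32_198_050      # reported size
    duration_s = 2.56326669      # reported wall time
    print(f"{size_bytes / duration_s / 2**20:.1f} MiB/s")   # ~12.0 MiB/s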
Jan 13 20:09:34.976741 containerd[1445]: time="2025-01-13T20:09:34.976691535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:34.977647 containerd[1445]: time="2025-01-13T20:09:34.977430822Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381299" Jan 13 20:09:34.978219 containerd[1445]: time="2025-01-13T20:09:34.978192960Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:34.981136 containerd[1445]: time="2025-01-13T20:09:34.981077280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:34.982300 containerd[1445]: time="2025-01-13T20:09:34.982239792Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 2.803445857s" Jan 13 20:09:34.982300 containerd[1445]: time="2025-01-13T20:09:34.982272820Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Jan 13 20:09:35.010016 containerd[1445]: time="2025-01-13T20:09:35.009985181Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:09:36.643437 containerd[1445]: time="2025-01-13T20:09:36.643383551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:36.643898 containerd[1445]: time="2025-01-13T20:09:36.643851596Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765642" Jan 13 20:09:36.644788 containerd[1445]: time="2025-01-13T20:09:36.644760060Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:36.648067 containerd[1445]: time="2025-01-13T20:09:36.648008235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:36.649807 containerd[1445]: time="2025-01-13T20:09:36.649687578Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.639573123s" Jan 13 20:09:36.649807 containerd[1445]: time="2025-01-13T20:09:36.649720248Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Jan 13 20:09:36.668249 
containerd[1445]: time="2025-01-13T20:09:36.668210133Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:09:37.855643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1811478041.mount: Deactivated successfully. Jan 13 20:09:38.323777 containerd[1445]: time="2025-01-13T20:09:38.323646552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:38.324667 containerd[1445]: time="2025-01-13T20:09:38.324610295Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273979" Jan 13 20:09:38.325579 containerd[1445]: time="2025-01-13T20:09:38.325524291Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:38.329173 containerd[1445]: time="2025-01-13T20:09:38.329037193Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.660664995s" Jan 13 20:09:38.329173 containerd[1445]: time="2025-01-13T20:09:38.329079426Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 13 20:09:38.329559 containerd[1445]: time="2025-01-13T20:09:38.329516699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:38.348311 containerd[1445]: time="2025-01-13T20:09:38.348256394Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:09:38.913151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount331100070.mount: Deactivated successfully. 
Jan 13 20:09:39.575411 containerd[1445]: time="2025-01-13T20:09:39.575367980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:39.576513 containerd[1445]: time="2025-01-13T20:09:39.575682605Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 13 20:09:39.577188 containerd[1445]: time="2025-01-13T20:09:39.576848661Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:39.579941 containerd[1445]: time="2025-01-13T20:09:39.579899485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:39.582145 containerd[1445]: time="2025-01-13T20:09:39.582008980Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.233713568s" Jan 13 20:09:39.582145 containerd[1445]: time="2025-01-13T20:09:39.582044239Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 20:09:39.601174 containerd[1445]: time="2025-01-13T20:09:39.601128106Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:09:40.019962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3886220234.mount: Deactivated successfully. 
Jan 13 20:09:40.024097 containerd[1445]: time="2025-01-13T20:09:40.024049603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:40.024704 containerd[1445]: time="2025-01-13T20:09:40.024647759Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 13 20:09:40.025284 containerd[1445]: time="2025-01-13T20:09:40.025250096Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:40.027544 containerd[1445]: time="2025-01-13T20:09:40.027507287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:40.028489 containerd[1445]: time="2025-01-13T20:09:40.028450614Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 427.119623ms" Jan 13 20:09:40.028530 containerd[1445]: time="2025-01-13T20:09:40.028484940Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 20:09:40.047462 containerd[1445]: time="2025-01-13T20:09:40.047428533Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:09:40.705212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount582269165.mount: Deactivated successfully. Jan 13 20:09:44.149538 containerd[1445]: time="2025-01-13T20:09:44.149490282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:44.150490 containerd[1445]: time="2025-01-13T20:09:44.149971219Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jan 13 20:09:44.151118 containerd[1445]: time="2025-01-13T20:09:44.151075998Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:44.154506 containerd[1445]: time="2025-01-13T20:09:44.154457041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:44.156873 containerd[1445]: time="2025-01-13T20:09:44.156842738Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 4.109378868s" Jan 13 20:09:44.156873 containerd[1445]: time="2025-01-13T20:09:44.156874973Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 13 20:09:44.533670 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
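The restart counter above ticks roughly ten seconds after each kubelet exit (failure at 20:09:34.305, restart scheduled at 20:09:44.533), which is consistent with a Restart=always / RestartSec=10 unit setting such as the one kubeadm ships for kubelet.service; the unit file itself does not appear in this log, so that is an inference. Checking the interval from the journal's own timestamps:

    from datetime import datetime

    # Timestamps copied from the journal lines above (same day, same node).
    failed    = datetime.strptime("20:09:34.305158", "%H:%M:%S.%f")  # kubelet.service failed
    restarted = datetime.strptime("20:09:44.533670", "%H:%M:%S.%f")  # restart counter is at 2
    print((restarted - failed).total_seconds())  # ~10.2s: RestartSec=10 plus scheduling slack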
Jan 13 20:09:44.550845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:09:44.644653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:09:44.649173 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:09:44.695479 kubelet[2101]: E0113 20:09:44.695428 2101 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:09:44.698290 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:09:44.698425 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:09:49.405888 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:09:49.421175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:09:49.443231 systemd[1]: Reloading requested from client PID 2162 ('systemctl') (unit session-7.scope)... Jan 13 20:09:49.443253 systemd[1]: Reloading... Jan 13 20:09:49.520667 zram_generator::config[2201]: No configuration found. Jan 13 20:09:49.635621 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:09:49.701883 systemd[1]: Reloading finished in 257 ms. Jan 13 20:09:49.752396 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:09:49.756016 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:09:49.756349 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:09:49.758333 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:09:49.870059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:09:49.874195 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:09:49.911904 kubelet[2248]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:09:49.911904 kubelet[2248]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:09:49.911904 kubelet[2248]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
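The three deprecation warnings above all say the same thing: these kubelet flags should move into the config file. Two of them have direct KubeletConfiguration (v1beta1) equivalents; the endpoint and directory values below are the ones visible elsewhere in this log, while the mapping itself is illustrative:

    # Flag -> config-file equivalent, per the deprecation warnings above.
    FLAG_TO_CONFIG = {
        "--container-runtime-endpoint":
            "containerRuntimeEndpoint: unix:///run/containerd/containerd.sock",
        "--volume-plugin-dir":
            "volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
        # --pod-infra-container-image has no config-file field; per the warning,
        # the sandbox image will be reported by the CRI runtime instead.
    }
    for flag, conf in FLAG_TO_CONFIG.items():
        print(f"{flag:32s} -> {conf}")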
Jan 13 20:09:49.912231 kubelet[2248]: I0113 20:09:49.911960 2248 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:09:50.444513 kubelet[2248]: I0113 20:09:50.444466 2248 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:09:50.444513 kubelet[2248]: I0113 20:09:50.444499 2248 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:09:50.444771 kubelet[2248]: I0113 20:09:50.444743 2248 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:09:50.478623 kubelet[2248]: I0113 20:09:50.477790 2248 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:09:50.479967 kubelet[2248]: E0113 20:09:50.479935 2248 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:50.493208 kubelet[2248]: I0113 20:09:50.493175 2248 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:09:50.494391 kubelet[2248]: I0113 20:09:50.494363 2248 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:09:50.494746 kubelet[2248]: I0113 20:09:50.494721 2248 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:09:50.494903 kubelet[2248]: I0113 20:09:50.494879 2248 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:09:50.494957 kubelet[2248]: I0113 20:09:50.494949 2248 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:09:50.496214 kubelet[2248]: I0113 20:09:50.496185 2248 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:09:50.502641 kubelet[2248]: I0113 20:09:50.502605 2248 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:09:50.502804 kubelet[2248]: 
I0113 20:09:50.502792 2248 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:09:50.502885 kubelet[2248]: I0113 20:09:50.502874 2248 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:09:50.502941 kubelet[2248]: I0113 20:09:50.502932 2248 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:09:50.503320 kubelet[2248]: W0113 20:09:50.503274 2248 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:50.503441 kubelet[2248]: E0113 20:09:50.503427 2248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:50.504065 kubelet[2248]: W0113 20:09:50.503976 2248 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:50.504109 kubelet[2248]: E0113 20:09:50.504074 2248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:50.505444 kubelet[2248]: I0113 20:09:50.505422 2248 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:09:50.506106 kubelet[2248]: I0113 20:09:50.506090 2248 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:09:50.507955 kubelet[2248]: W0113 20:09:50.507921 2248 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
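Every "dial tcp 10.0.0.49:6443: connect: connection refused" line above has the same explanation: the kubelet comes up before the API server it is itself expected to launch as a static pod from /etc/kubernetes/manifests (the "Adding static pod path" line). The reflector and certificate-manager retries resolve on their own once that pod is running. A trivial probe of the same endpoint, with host and port taken from the log:

    import socket

    def api_server_up(host: str = "10.0.0.49", port: int = 6443,
                      timeout: float = 1.0) -> bool:
        # TCP-level reachability only; says nothing about TLS or API health.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(api_server_up())   # False at this point in the log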
Jan 13 20:09:50.508949 kubelet[2248]: I0113 20:09:50.508930 2248 server.go:1256] "Started kubelet" Jan 13 20:09:50.509161 kubelet[2248]: I0113 20:09:50.509142 2248 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:09:50.511672 kubelet[2248]: I0113 20:09:50.510425 2248 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:09:50.511672 kubelet[2248]: I0113 20:09:50.511027 2248 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:09:50.511672 kubelet[2248]: I0113 20:09:50.511219 2248 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:09:50.512504 kubelet[2248]: I0113 20:09:50.512476 2248 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:09:50.513958 kubelet[2248]: I0113 20:09:50.513931 2248 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:09:50.514079 kubelet[2248]: I0113 20:09:50.514062 2248 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:09:50.514131 kubelet[2248]: I0113 20:09:50.514119 2248 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:09:50.514483 kubelet[2248]: W0113 20:09:50.514434 2248 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:50.514536 kubelet[2248]: E0113 20:09:50.514524 2248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:50.514588 kubelet[2248]: E0113 20:09:50.514570 2248 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:09:50.514997 kubelet[2248]: E0113 20:09:50.514969 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="200ms" Jan 13 20:09:50.517341 kubelet[2248]: E0113 20:09:50.516394 2248 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a597adb8a4141 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:09:50.508900673 +0000 UTC m=+0.631346025,LastTimestamp:2025-01-13 20:09:50.508900673 +0000 UTC m=+0.631346025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:09:50.517341 kubelet[2248]: I0113 20:09:50.516780 2248 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:09:50.517341 kubelet[2248]: I0113 20:09:50.516901 2248 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": 
dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:09:50.518792 kubelet[2248]: E0113 20:09:50.518755 2248 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:09:50.521243 kubelet[2248]: I0113 20:09:50.521206 2248 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:09:50.534174 kubelet[2248]: I0113 20:09:50.534121 2248 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:09:50.534174 kubelet[2248]: I0113 20:09:50.534145 2248 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:09:50.534174 kubelet[2248]: I0113 20:09:50.534163 2248 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:09:50.537513 kubelet[2248]: I0113 20:09:50.537461 2248 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:09:50.538798 kubelet[2248]: I0113 20:09:50.538747 2248 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:09:50.538798 kubelet[2248]: I0113 20:09:50.538784 2248 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:09:50.538798 kubelet[2248]: I0113 20:09:50.538805 2248 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:09:50.538932 kubelet[2248]: E0113 20:09:50.538874 2248 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:09:50.616392 kubelet[2248]: I0113 20:09:50.616366 2248 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:09:50.616877 kubelet[2248]: E0113 20:09:50.616848 2248 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Jan 13 20:09:50.639070 kubelet[2248]: E0113 20:09:50.639040 2248 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:09:50.715706 kubelet[2248]: E0113 20:09:50.715592 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="400ms" Jan 13 20:09:50.799409 kubelet[2248]: W0113 20:09:50.799352 2248 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:50.799409 kubelet[2248]: E0113 20:09:50.799413 2248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:50.818686 kubelet[2248]: I0113 20:09:50.818656 2248 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:09:50.819002 kubelet[2248]: E0113 20:09:50.818975 2248 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Jan 13 20:09:50.840069 kubelet[2248]: E0113 
20:09:50.840042 2248 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:09:50.864279 kubelet[2248]: I0113 20:09:50.864259 2248 policy_none.go:49] "None policy: Start" Jan 13 20:09:50.864975 kubelet[2248]: I0113 20:09:50.864957 2248 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:09:50.865025 kubelet[2248]: I0113 20:09:50.865003 2248 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:09:50.930467 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:09:50.946926 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:09:50.958625 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:09:50.959632 kubelet[2248]: I0113 20:09:50.959611 2248 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:09:50.959916 kubelet[2248]: I0113 20:09:50.959862 2248 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:09:50.960857 kubelet[2248]: E0113 20:09:50.960840 2248 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:09:51.116830 kubelet[2248]: E0113 20:09:51.116708 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="800ms" Jan 13 20:09:51.220308 kubelet[2248]: I0113 20:09:51.220271 2248 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:09:51.220653 kubelet[2248]: E0113 20:09:51.220633 2248 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Jan 13 20:09:51.240803 kubelet[2248]: I0113 20:09:51.240759 2248 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:09:51.241915 kubelet[2248]: I0113 20:09:51.241893 2248 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:09:51.242692 kubelet[2248]: I0113 20:09:51.242584 2248 topology_manager.go:215] "Topology Admit Handler" podUID="e1db3c2de1ead95e054c6c839f89c704" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:09:51.247133 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Jan 13 20:09:51.272039 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. Jan 13 20:09:51.275895 systemd[1]: Created slice kubepods-burstable-pode1db3c2de1ead95e054c6c839f89c704.slice - libcontainer container kubepods-burstable-pode1db3c2de1ead95e054c6c839f89c704.slice. 
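The slice names above encode both QoS class and pod UID: each static control-plane pod admitted by the Topology Admit Handler gets a kubepods-burstable-pod<uid>.slice cgroup under the systemd cgroup driver seen in the node config earlier. Reconstructing one of them from the log's own UID:

    # UID from the "Topology Admit Handler" line for kube-controller-manager-localhost.
    pod_uid = "4f8e0d694c07e04969646aa3c152c34a"
    print(f"kubepods-burstable-pod{pod_uid}.slice")  # matches the "Created slice" line above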
Jan 13 20:09:51.318611 kubelet[2248]: I0113 20:09:51.318532 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1db3c2de1ead95e054c6c839f89c704-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1db3c2de1ead95e054c6c839f89c704\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:09:51.318611 kubelet[2248]: I0113 20:09:51.318587 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1db3c2de1ead95e054c6c839f89c704-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1db3c2de1ead95e054c6c839f89c704\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:09:51.318611 kubelet[2248]: I0113 20:09:51.318631 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1db3c2de1ead95e054c6c839f89c704-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e1db3c2de1ead95e054c6c839f89c704\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:09:51.318811 kubelet[2248]: I0113 20:09:51.318653 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:09:51.318811 kubelet[2248]: I0113 20:09:51.318677 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:09:51.318811 kubelet[2248]: I0113 20:09:51.318718 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:09:51.318811 kubelet[2248]: I0113 20:09:51.318766 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:09:51.318893 kubelet[2248]: I0113 20:09:51.318819 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:09:51.318893 kubelet[2248]: I0113 20:09:51.318856 2248 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 13 20:09:51.407078 kubelet[2248]: W0113 20:09:51.406938 2248 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:51.407078 kubelet[2248]: E0113 20:09:51.406995 2248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:51.457901 kubelet[2248]: W0113 20:09:51.457833 2248 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:51.457901 kubelet[2248]: E0113 20:09:51.457913 2248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:51.572330 kubelet[2248]: E0113 20:09:51.572276 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:51.573053 containerd[1445]: time="2025-01-13T20:09:51.573009336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:51.574122 kubelet[2248]: E0113 20:09:51.574103 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:51.574690 containerd[1445]: time="2025-01-13T20:09:51.574454329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:51.578567 kubelet[2248]: E0113 20:09:51.578525 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:51.579015 containerd[1445]: time="2025-01-13T20:09:51.578980270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e1db3c2de1ead95e054c6c839f89c704,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:51.841005 kubelet[2248]: W0113 20:09:51.840876 2248 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:51.841005 kubelet[2248]: E0113 20:09:51.840927 2248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:51.917564 kubelet[2248]: E0113 20:09:51.917519 2248 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.49:6443: connect: connection refused" interval="1.6s" Jan 13 20:09:52.022313 kubelet[2248]: I0113 20:09:52.022277 2248 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:09:52.022652 kubelet[2248]: E0113 20:09:52.022624 2248 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.49:6443/api/v1/nodes\": dial tcp 10.0.0.49:6443: connect: connection refused" node="localhost" Jan 13 20:09:52.079781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount104944438.mount: Deactivated successfully. Jan 13 20:09:52.083912 containerd[1445]: time="2025-01-13T20:09:52.083869581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:09:52.085903 containerd[1445]: time="2025-01-13T20:09:52.085852635Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:09:52.087522 containerd[1445]: time="2025-01-13T20:09:52.087478331Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 13 20:09:52.088186 containerd[1445]: time="2025-01-13T20:09:52.088143172Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:09:52.089728 containerd[1445]: time="2025-01-13T20:09:52.089691218Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:09:52.091136 containerd[1445]: time="2025-01-13T20:09:52.091039883Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:09:52.091454 containerd[1445]: time="2025-01-13T20:09:52.091351483Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:09:52.094543 containerd[1445]: time="2025-01-13T20:09:52.094508919Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 519.984542ms" Jan 13 20:09:52.095610 containerd[1445]: time="2025-01-13T20:09:52.095570483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:09:52.096502 containerd[1445]: time="2025-01-13T20:09:52.096356016Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 517.315607ms" Jan 13 20:09:52.098184 containerd[1445]: 
time="2025-01-13T20:09:52.098034944Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 524.948087ms" Jan 13 20:09:52.217136 kubelet[2248]: W0113 20:09:52.217093 2248 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:52.217136 kubelet[2248]: E0113 20:09:52.217132 2248 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:52.252013 containerd[1445]: time="2025-01-13T20:09:52.251908484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:52.252013 containerd[1445]: time="2025-01-13T20:09:52.251987533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:52.252013 containerd[1445]: time="2025-01-13T20:09:52.252002639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:52.252354 containerd[1445]: time="2025-01-13T20:09:52.251981858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:52.252354 containerd[1445]: time="2025-01-13T20:09:52.252034850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:52.252354 containerd[1445]: time="2025-01-13T20:09:52.252050716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:52.252354 containerd[1445]: time="2025-01-13T20:09:52.252084566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:52.252354 containerd[1445]: time="2025-01-13T20:09:52.252123810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:52.253135 containerd[1445]: time="2025-01-13T20:09:52.253040305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:52.253243 containerd[1445]: time="2025-01-13T20:09:52.253208394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:52.253334 containerd[1445]: time="2025-01-13T20:09:52.253231613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:52.253487 containerd[1445]: time="2025-01-13T20:09:52.253451215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:52.280776 systemd[1]: Started cri-containerd-2d41570954d76e50af1dfdf88da81d140051b9ee07adb6eb2891d19a4d40aaca.scope - libcontainer container 2d41570954d76e50af1dfdf88da81d140051b9ee07adb6eb2891d19a4d40aaca. Jan 13 20:09:52.282153 systemd[1]: Started cri-containerd-59548773dca847378f20ce320b3a888adad8a4bee9190e61a8e464f5248db063.scope - libcontainer container 59548773dca847378f20ce320b3a888adad8a4bee9190e61a8e464f5248db063. Jan 13 20:09:52.283246 systemd[1]: Started cri-containerd-b364153c7bb017adef946577f7e6f04a6b97530704564ba58880d9f99ffff911.scope - libcontainer container b364153c7bb017adef946577f7e6f04a6b97530704564ba58880d9f99ffff911. Jan 13 20:09:52.311490 containerd[1445]: time="2025-01-13T20:09:52.311345155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"59548773dca847378f20ce320b3a888adad8a4bee9190e61a8e464f5248db063\"" Jan 13 20:09:52.312971 kubelet[2248]: E0113 20:09:52.312941 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:52.318937 containerd[1445]: time="2025-01-13T20:09:52.318858709Z" level=info msg="CreateContainer within sandbox \"59548773dca847378f20ce320b3a888adad8a4bee9190e61a8e464f5248db063\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:09:52.320602 containerd[1445]: time="2025-01-13T20:09:52.320487761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d41570954d76e50af1dfdf88da81d140051b9ee07adb6eb2891d19a4d40aaca\"" Jan 13 20:09:52.321263 kubelet[2248]: E0113 20:09:52.321208 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:52.323049 containerd[1445]: time="2025-01-13T20:09:52.323025596Z" level=info msg="CreateContainer within sandbox \"2d41570954d76e50af1dfdf88da81d140051b9ee07adb6eb2891d19a4d40aaca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:09:52.325217 containerd[1445]: time="2025-01-13T20:09:52.325188448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e1db3c2de1ead95e054c6c839f89c704,Namespace:kube-system,Attempt:0,} returns sandbox id \"b364153c7bb017adef946577f7e6f04a6b97530704564ba58880d9f99ffff911\"" Jan 13 20:09:52.326524 kubelet[2248]: E0113 20:09:52.326486 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:52.328316 containerd[1445]: time="2025-01-13T20:09:52.328280383Z" level=info msg="CreateContainer within sandbox \"b364153c7bb017adef946577f7e6f04a6b97530704564ba58880d9f99ffff911\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:09:52.338437 containerd[1445]: time="2025-01-13T20:09:52.338387521Z" level=info msg="CreateContainer within sandbox \"59548773dca847378f20ce320b3a888adad8a4bee9190e61a8e464f5248db063\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"655768fe373746c390601c876520307074452b5b21da6277edacbe6daacc162d\"" Jan 
13 20:09:52.339152 containerd[1445]: time="2025-01-13T20:09:52.339115545Z" level=info msg="StartContainer for \"655768fe373746c390601c876520307074452b5b21da6277edacbe6daacc162d\"" Jan 13 20:09:52.341065 containerd[1445]: time="2025-01-13T20:09:52.340946936Z" level=info msg="CreateContainer within sandbox \"2d41570954d76e50af1dfdf88da81d140051b9ee07adb6eb2891d19a4d40aaca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c3635c1c9cf6016f53f3887a61c82c877ab19ab50f01cdc9ad757afec17d5342\"" Jan 13 20:09:52.341481 containerd[1445]: time="2025-01-13T20:09:52.341393933Z" level=info msg="StartContainer for \"c3635c1c9cf6016f53f3887a61c82c877ab19ab50f01cdc9ad757afec17d5342\"" Jan 13 20:09:52.344911 containerd[1445]: time="2025-01-13T20:09:52.344876996Z" level=info msg="CreateContainer within sandbox \"b364153c7bb017adef946577f7e6f04a6b97530704564ba58880d9f99ffff911\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cda74fb726369057e86b971ac4cf35d6f730f44122f99b81ad8b6cc265820ea6\"" Jan 13 20:09:52.346215 containerd[1445]: time="2025-01-13T20:09:52.346192172Z" level=info msg="StartContainer for \"cda74fb726369057e86b971ac4cf35d6f730f44122f99b81ad8b6cc265820ea6\"" Jan 13 20:09:52.369768 systemd[1]: Started cri-containerd-655768fe373746c390601c876520307074452b5b21da6277edacbe6daacc162d.scope - libcontainer container 655768fe373746c390601c876520307074452b5b21da6277edacbe6daacc162d. Jan 13 20:09:52.370941 systemd[1]: Started cri-containerd-c3635c1c9cf6016f53f3887a61c82c877ab19ab50f01cdc9ad757afec17d5342.scope - libcontainer container c3635c1c9cf6016f53f3887a61c82c877ab19ab50f01cdc9ad757afec17d5342. Jan 13 20:09:52.374363 systemd[1]: Started cri-containerd-cda74fb726369057e86b971ac4cf35d6f730f44122f99b81ad8b6cc265820ea6.scope - libcontainer container cda74fb726369057e86b971ac4cf35d6f730f44122f99b81ad8b6cc265820ea6. 
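[Note] The reflector failures above ("dial tcp 10.0.0.49:6443: connect: connection refused") are the kubelet's client-go informers trying to LIST nodes, services, CSIDrivers, and RuntimeClasses before the kube-apiserver static pod it is simultaneously bootstrapping has come up; they retry with backoff until the apiserver container below starts serving. A minimal client-go sketch of the same LIST call the node reflector makes (the kubeconfig path is a placeholder assumption, not taken from this log):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the kubelet uses its bootstrap/rotated credentials.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Mirrors GET /api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost from the log.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=localhost",
	})
	if err != nil {
		// While the apiserver is still starting this fails with "connection refused",
		// exactly like the reflector warnings above; the informer retries.
		fmt.Println("list nodes:", err)
		return
	}
	fmt.Println("nodes found:", len(nodes.Items))
}
```

Once the kube-apiserver container further down starts successfully, the same call begins to succeed and the reflectors establish their watches.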
Jan 13 20:09:52.404365 containerd[1445]: time="2025-01-13T20:09:52.404304156Z" level=info msg="StartContainer for \"655768fe373746c390601c876520307074452b5b21da6277edacbe6daacc162d\" returns successfully" Jan 13 20:09:52.498667 kubelet[2248]: E0113 20:09:52.498634 2248 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.49:6443: connect: connection refused Jan 13 20:09:52.535825 containerd[1445]: time="2025-01-13T20:09:52.535779149Z" level=info msg="StartContainer for \"cda74fb726369057e86b971ac4cf35d6f730f44122f99b81ad8b6cc265820ea6\" returns successfully" Jan 13 20:09:52.535992 containerd[1445]: time="2025-01-13T20:09:52.535872145Z" level=info msg="StartContainer for \"c3635c1c9cf6016f53f3887a61c82c877ab19ab50f01cdc9ad757afec17d5342\" returns successfully" Jan 13 20:09:52.542901 kubelet[2248]: E0113 20:09:52.542866 2248 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.49:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.49:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a597adb8a4141 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:09:50.508900673 +0000 UTC m=+0.631346025,LastTimestamp:2025-01-13 20:09:50.508900673 +0000 UTC m=+0.631346025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:09:52.551401 kubelet[2248]: E0113 20:09:52.551170 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:52.553212 kubelet[2248]: E0113 20:09:52.553068 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:52.556563 kubelet[2248]: E0113 20:09:52.556503 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:53.558374 kubelet[2248]: E0113 20:09:53.558341 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:53.558794 kubelet[2248]: E0113 20:09:53.558771 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:53.624237 kubelet[2248]: I0113 20:09:53.624209 2248 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:09:53.857465 kubelet[2248]: E0113 20:09:53.857136 2248 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 20:09:53.927437 kubelet[2248]: I0113 20:09:53.927397 2248 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 
13 20:09:53.933957 kubelet[2248]: E0113 20:09:53.933931 2248 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:09:54.034985 kubelet[2248]: E0113 20:09:54.034940 2248 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:09:54.135728 kubelet[2248]: E0113 20:09:54.135268 2248 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:09:54.235736 kubelet[2248]: E0113 20:09:54.235699 2248 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:09:54.336177 kubelet[2248]: E0113 20:09:54.336144 2248 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:09:54.436680 kubelet[2248]: E0113 20:09:54.436603 2248 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:09:54.506105 kubelet[2248]: I0113 20:09:54.506076 2248 apiserver.go:52] "Watching apiserver" Jan 13 20:09:54.515195 kubelet[2248]: I0113 20:09:54.515151 2248 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:09:54.562952 kubelet[2248]: E0113 20:09:54.562922 2248 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 13 20:09:54.564627 kubelet[2248]: E0113 20:09:54.563353 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:56.495145 systemd[1]: Reloading requested from client PID 2531 ('systemctl') (unit session-7.scope)... Jan 13 20:09:56.495160 systemd[1]: Reloading... Jan 13 20:09:56.547638 zram_generator::config[2570]: No configuration found. Jan 13 20:09:56.628575 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:09:56.662975 kubelet[2248]: E0113 20:09:56.662911 2248 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:56.691545 systemd[1]: Reloading finished in 196 ms. Jan 13 20:09:56.721868 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:09:56.738453 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:09:56.738693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:09:56.753854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:09:56.838374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:09:56.841977 (kubelet)[2612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:09:56.880833 kubelet[2612]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
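[Note] The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" error above is transient: static pods run from disk regardless, but the kubelet cannot post their mirror pods until the freshly started apiserver has installed its built-in priority classes. A hedged sketch of what that built-in object amounts to if created through client-go (the apiserver bootstraps it itself; the kubeconfig path here is an assumption):

```go
package main

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
		Value:       2000001000, // the built-in value for node-critical system pods
		Description: "Used for system critical pods that must not be moved from their current node.",
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{}); err != nil {
		// Returns AlreadyExists once the apiserver has bootstrapped it.
		fmt.Println("create priorityclass:", err)
		return
	}
	fmt.Println("created system-node-critical")
}
```

After the apiserver finishes its bootstrap, the kubelet's next sync creates the mirror pods without error.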
Jan 13 20:09:56.880833 kubelet[2612]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:09:56.880833 kubelet[2612]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:09:56.881156 kubelet[2612]: I0113 20:09:56.880858 2612 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:09:56.885009 kubelet[2612]: I0113 20:09:56.884984 2612 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:09:56.885009 kubelet[2612]: I0113 20:09:56.885011 2612 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:09:56.885207 kubelet[2612]: I0113 20:09:56.885192 2612 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:09:56.886620 kubelet[2612]: I0113 20:09:56.886589 2612 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:09:56.888500 kubelet[2612]: I0113 20:09:56.888410 2612 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:09:56.895652 kubelet[2612]: I0113 20:09:56.894841 2612 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:09:56.895652 kubelet[2612]: I0113 20:09:56.895024 2612 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:09:56.895652 kubelet[2612]: I0113 20:09:56.895187 2612 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:09:56.895652 kubelet[2612]: I0113 20:09:56.895206 2612 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:09:56.895652 kubelet[2612]: I0113 20:09:56.895214 2612 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:09:56.895652 
kubelet[2612]: I0113 20:09:56.895241 2612 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:09:56.895870 kubelet[2612]: I0113 20:09:56.895323 2612 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:09:56.895870 kubelet[2612]: I0113 20:09:56.895335 2612 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:09:56.895870 kubelet[2612]: I0113 20:09:56.895355 2612 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:09:56.895870 kubelet[2612]: I0113 20:09:56.895368 2612 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:09:56.896141 kubelet[2612]: I0113 20:09:56.896109 2612 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:09:56.896299 kubelet[2612]: I0113 20:09:56.896282 2612 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:09:56.900457 kubelet[2612]: I0113 20:09:56.900436 2612 server.go:1256] "Started kubelet" Jan 13 20:09:56.901679 kubelet[2612]: I0113 20:09:56.901510 2612 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:09:56.901592 sudo[2627]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:09:56.901881 sudo[2627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:09:56.905315 kubelet[2612]: I0113 20:09:56.905300 2612 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:09:56.906087 kubelet[2612]: I0113 20:09:56.905962 2612 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:09:56.906087 kubelet[2612]: I0113 20:09:56.905986 2612 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:09:56.906235 kubelet[2612]: I0113 20:09:56.906225 2612 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:09:56.907039 kubelet[2612]: I0113 20:09:56.906659 2612 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:09:56.914638 kubelet[2612]: I0113 20:09:56.906696 2612 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:09:56.914638 kubelet[2612]: I0113 20:09:56.910370 2612 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:09:56.914638 kubelet[2612]: E0113 20:09:56.911640 2612 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:09:56.918645 kubelet[2612]: I0113 20:09:56.916252 2612 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:09:56.918645 kubelet[2612]: I0113 20:09:56.916345 2612 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:09:56.918645 kubelet[2612]: E0113 20:09:56.917245 2612 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:09:56.918645 kubelet[2612]: I0113 20:09:56.917360 2612 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:09:56.936841 kubelet[2612]: I0113 20:09:56.936819 2612 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 13 20:09:56.942620 kubelet[2612]: I0113 20:09:56.940101 2612 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:09:56.942620 kubelet[2612]: I0113 20:09:56.940120 2612 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:09:56.942620 kubelet[2612]: I0113 20:09:56.940136 2612 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:09:56.942620 kubelet[2612]: E0113 20:09:56.940178 2612 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:09:56.960644 kubelet[2612]: I0113 20:09:56.960629 2612 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:09:56.960730 kubelet[2612]: I0113 20:09:56.960722 2612 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:09:56.960796 kubelet[2612]: I0113 20:09:56.960788 2612 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:09:56.960963 kubelet[2612]: I0113 20:09:56.960954 2612 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:09:56.961037 kubelet[2612]: I0113 20:09:56.961028 2612 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:09:56.961096 kubelet[2612]: I0113 20:09:56.961087 2612 policy_none.go:49] "None policy: Start" Jan 13 20:09:56.961688 kubelet[2612]: I0113 20:09:56.961672 2612 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:09:56.961739 kubelet[2612]: I0113 20:09:56.961697 2612 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:09:56.961859 kubelet[2612]: I0113 20:09:56.961835 2612 state_mem.go:75] "Updated machine memory state" Jan 13 20:09:56.965472 kubelet[2612]: I0113 20:09:56.965446 2612 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:09:56.965862 kubelet[2612]: I0113 20:09:56.965677 2612 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:09:57.016021 kubelet[2612]: I0113 20:09:57.014930 2612 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:09:57.041329 kubelet[2612]: I0113 20:09:57.040305 2612 topology_manager.go:215] "Topology Admit Handler" podUID="e1db3c2de1ead95e054c6c839f89c704" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:09:57.041329 kubelet[2612]: I0113 20:09:57.040393 2612 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:09:57.041329 kubelet[2612]: I0113 20:09:57.040452 2612 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:09:57.051280 kubelet[2612]: E0113 20:09:57.051254 2612 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 13 20:09:57.052218 kubelet[2612]: I0113 20:09:57.052198 2612 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 20:09:57.052284 kubelet[2612]: I0113 20:09:57.052267 2612 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:09:57.108248 kubelet[2612]: I0113 20:09:57.108218 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/e1db3c2de1ead95e054c6c839f89c704-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e1db3c2de1ead95e054c6c839f89c704\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:09:57.108324 kubelet[2612]: I0113 20:09:57.108256 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:09:57.108324 kubelet[2612]: I0113 20:09:57.108279 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:09:57.108324 kubelet[2612]: I0113 20:09:57.108299 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:09:57.108403 kubelet[2612]: I0113 20:09:57.108330 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1db3c2de1ead95e054c6c839f89c704-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1db3c2de1ead95e054c6c839f89c704\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:09:57.108403 kubelet[2612]: I0113 20:09:57.108351 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1db3c2de1ead95e054c6c839f89c704-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e1db3c2de1ead95e054c6c839f89c704\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:09:57.108403 kubelet[2612]: I0113 20:09:57.108375 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:09:57.108463 kubelet[2612]: I0113 20:09:57.108403 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:09:57.108629 kubelet[2612]: I0113 20:09:57.108523 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:09:57.338942 sudo[2627]: pam_unix(sudo:session): session closed for user root Jan 13 20:09:57.352537 kubelet[2612]: E0113 20:09:57.351952 2612 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:57.352537 kubelet[2612]: E0113 20:09:57.352126 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:57.352537 kubelet[2612]: E0113 20:09:57.352478 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:57.895923 kubelet[2612]: I0113 20:09:57.895881 2612 apiserver.go:52] "Watching apiserver" Jan 13 20:09:57.906423 kubelet[2612]: I0113 20:09:57.906383 2612 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:09:57.952561 kubelet[2612]: E0113 20:09:57.952115 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:57.952561 kubelet[2612]: E0113 20:09:57.952515 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:57.953248 kubelet[2612]: E0113 20:09:57.952835 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:57.973317 kubelet[2612]: I0113 20:09:57.973223 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.973161796 podStartE2EDuration="973.161796ms" podCreationTimestamp="2025-01-13 20:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:57.966569049 +0000 UTC m=+1.121539713" watchObservedRunningTime="2025-01-13 20:09:57.973161796 +0000 UTC m=+1.128132340" Jan 13 20:09:57.980609 kubelet[2612]: I0113 20:09:57.980290 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.980161673 podStartE2EDuration="980.161673ms" podCreationTimestamp="2025-01-13 20:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:57.973327193 +0000 UTC m=+1.128297737" watchObservedRunningTime="2025-01-13 20:09:57.980161673 +0000 UTC m=+1.135132217" Jan 13 20:09:57.980609 kubelet[2612]: I0113 20:09:57.980385 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.980365839 podStartE2EDuration="1.980365839s" podCreationTimestamp="2025-01-13 20:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:57.980041286 +0000 UTC m=+1.135011830" watchObservedRunningTime="2025-01-13 20:09:57.980365839 +0000 UTC m=+1.135336383" Jan 13 20:09:58.608423 sudo[1624]: pam_unix(sudo:session): session closed for user root Jan 13 20:09:58.609942 sshd[1623]: Connection closed by 10.0.0.1 port 58122 Jan 13 20:09:58.610466 sshd-session[1621]: pam_unix(sshd:session): session closed for user core 
Jan 13 20:09:58.613907 systemd[1]: sshd@6-10.0.0.49:22-10.0.0.1:58122.service: Deactivated successfully. Jan 13 20:09:58.615361 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:09:58.615520 systemd[1]: session-7.scope: Consumed 7.210s CPU time, 191.4M memory peak, 0B memory swap peak. Jan 13 20:09:58.615960 systemd-logind[1427]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:09:58.616967 systemd-logind[1427]: Removed session 7. Jan 13 20:09:58.953008 kubelet[2612]: E0113 20:09:58.952893 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:09:58.953008 kubelet[2612]: E0113 20:09:58.952895 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:04.518620 kubelet[2612]: E0113 20:10:04.518485 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:04.960289 kubelet[2612]: E0113 20:10:04.960249 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:06.047540 kubelet[2612]: E0113 20:10:06.047206 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:06.962429 kubelet[2612]: E0113 20:10:06.962391 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:07.139186 update_engine[1433]: I20250113 20:10:07.139117 1433 update_attempter.cc:509] Updating boot flags... Jan 13 20:10:07.178617 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2695) Jan 13 20:10:07.199661 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2697) Jan 13 20:10:07.963336 kubelet[2612]: E0113 20:10:07.963309 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:08.595875 kubelet[2612]: E0113 20:10:08.595847 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:12.454706 kubelet[2612]: I0113 20:10:12.454671 2612 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:10:12.455341 containerd[1445]: time="2025-01-13T20:10:12.455012265Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
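[Note] The recurring dns.go:153 "Nameserver limits exceeded" message is the kubelet noticing more than three nameserver entries in the node's resolv.conf and truncating to the first three (here "1.1.1.1 1.0.0.1 8.8.8.8"), mirroring the historical glibc MAXNS limit; it is a warning, not a failure. A rough stdlib sketch of that truncation (a simplification of what the kubelet's DNS configurer does):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // mirrors the historical glibc MAXNS limit the kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var ns []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > maxNameservers {
		// Extra servers are silently dropped, with a warning like the one in the log.
		fmt.Printf("Nameserver limits exceeded, applied nameserver line is: %s\n",
			strings.Join(ns[:maxNameservers], " "))
		ns = ns[:maxNameservers]
	}
	fmt.Println("using nameservers:", ns)
}
```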
Jan 13 20:10:12.455625 kubelet[2612]: I0113 20:10:12.455608 2612 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:10:13.348415 kubelet[2612]: I0113 20:10:13.348364 2612 topology_manager.go:215] "Topology Admit Handler" podUID="212efb9d-c29d-487c-9d85-f1ece3cfa7d6" podNamespace="kube-system" podName="kube-proxy-4sqn6" Jan 13 20:10:13.355636 kubelet[2612]: I0113 20:10:13.355558 2612 topology_manager.go:215] "Topology Admit Handler" podUID="20e1414f-f785-4c00-9011-da60587c11f6" podNamespace="kube-system" podName="cilium-b6xv6" Jan 13 20:10:13.362536 systemd[1]: Created slice kubepods-besteffort-pod212efb9d_c29d_487c_9d85_f1ece3cfa7d6.slice - libcontainer container kubepods-besteffort-pod212efb9d_c29d_487c_9d85_f1ece3cfa7d6.slice. Jan 13 20:10:13.376721 systemd[1]: Created slice kubepods-burstable-pod20e1414f_f785_4c00_9011_da60587c11f6.slice - libcontainer container kubepods-burstable-pod20e1414f_f785_4c00_9011_da60587c11f6.slice. Jan 13 20:10:13.427622 kubelet[2612]: I0113 20:10:13.427566 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/212efb9d-c29d-487c-9d85-f1ece3cfa7d6-kube-proxy\") pod \"kube-proxy-4sqn6\" (UID: \"212efb9d-c29d-487c-9d85-f1ece3cfa7d6\") " pod="kube-system/kube-proxy-4sqn6" Jan 13 20:10:13.427814 kubelet[2612]: I0113 20:10:13.427800 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-xtables-lock\") pod \"cilium-b6xv6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " pod="kube-system/cilium-b6xv6" Jan 13 20:10:13.427891 kubelet[2612]: I0113 20:10:13.427881 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20e1414f-f785-4c00-9011-da60587c11f6-clustermesh-secrets\") pod \"cilium-b6xv6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " pod="kube-system/cilium-b6xv6" Jan 13 20:10:13.428021 kubelet[2612]: I0113 20:10:13.427979 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-cilium-cgroup\") pod \"cilium-b6xv6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " pod="kube-system/cilium-b6xv6" Jan 13 20:10:13.428055 kubelet[2612]: I0113 20:10:13.428026 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-lib-modules\") pod \"cilium-b6xv6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " pod="kube-system/cilium-b6xv6" Jan 13 20:10:13.428082 kubelet[2612]: I0113 20:10:13.428063 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-host-proc-sys-kernel\") pod \"cilium-b6xv6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " pod="kube-system/cilium-b6xv6" Jan 13 20:10:13.428104 kubelet[2612]: I0113 20:10:13.428090 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct99g\" (UniqueName: \"kubernetes.io/projected/20e1414f-f785-4c00-9011-da60587c11f6-kube-api-access-ct99g\") pod 
\"cilium-b6xv6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " pod="kube-system/cilium-b6xv6" Jan 13 20:10:13.428126 kubelet[2612]: I0113 20:10:13.428116 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/212efb9d-c29d-487c-9d85-f1ece3cfa7d6-xtables-lock\") pod \"kube-proxy-4sqn6\" (UID: \"212efb9d-c29d-487c-9d85-f1ece3cfa7d6\") " pod="kube-system/kube-proxy-4sqn6" Jan 13 20:10:13.428161 kubelet[2612]: I0113 20:10:13.428136 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-cni-path\") pod \"cilium-b6xv6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " pod="kube-system/cilium-b6xv6" Jan 13 20:10:13.428188 kubelet[2612]: I0113 20:10:13.428171 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/212efb9d-c29d-487c-9d85-f1ece3cfa7d6-lib-modules\") pod \"kube-proxy-4sqn6\" (UID: \"212efb9d-c29d-487c-9d85-f1ece3cfa7d6\") " pod="kube-system/kube-proxy-4sqn6" Jan 13 20:10:13.428458 kubelet[2612]: I0113 20:10:13.428219 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-bpf-maps\") pod \"cilium-b6xv6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " pod="kube-system/cilium-b6xv6" Jan 13 20:10:13.428458 kubelet[2612]: I0113 20:10:13.428263 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-hostproc\") pod \"cilium-b6xv6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " pod="kube-system/cilium-b6xv6" Jan 13 20:10:13.428458 kubelet[2612]: I0113 20:10:13.428304 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-etc-cni-netd\") pod \"cilium-b6xv6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " pod="kube-system/cilium-b6xv6" Jan 13 20:10:13.428458 kubelet[2612]: I0113 20:10:13.428332 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20e1414f-f785-4c00-9011-da60587c11f6-hubble-tls\") pod \"cilium-b6xv6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " pod="kube-system/cilium-b6xv6" Jan 13 20:10:13.428458 kubelet[2612]: I0113 20:10:13.428353 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-cilium-run\") pod \"cilium-b6xv6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " pod="kube-system/cilium-b6xv6" Jan 13 20:10:13.428458 kubelet[2612]: I0113 20:10:13.428374 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-host-proc-sys-net\") pod \"cilium-b6xv6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " pod="kube-system/cilium-b6xv6" Jan 13 20:10:13.428641 kubelet[2612]: I0113 20:10:13.428395 2612 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jnnc\" (UniqueName: \"kubernetes.io/projected/212efb9d-c29d-487c-9d85-f1ece3cfa7d6-kube-api-access-4jnnc\") pod \"kube-proxy-4sqn6\" (UID: \"212efb9d-c29d-487c-9d85-f1ece3cfa7d6\") " pod="kube-system/kube-proxy-4sqn6" Jan 13 20:10:13.428641 kubelet[2612]: I0113 20:10:13.428413 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20e1414f-f785-4c00-9011-da60587c11f6-cilium-config-path\") pod \"cilium-b6xv6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " pod="kube-system/cilium-b6xv6" Jan 13 20:10:13.562683 kubelet[2612]: I0113 20:10:13.562648 2612 topology_manager.go:215] "Topology Admit Handler" podUID="244a5a16-e215-4461-95cd-0b6d95e31d0e" podNamespace="kube-system" podName="cilium-operator-5cc964979-mwbs9" Jan 13 20:10:13.573707 systemd[1]: Created slice kubepods-besteffort-pod244a5a16_e215_4461_95cd_0b6d95e31d0e.slice - libcontainer container kubepods-besteffort-pod244a5a16_e215_4461_95cd_0b6d95e31d0e.slice. Jan 13 20:10:13.630052 kubelet[2612]: I0113 20:10:13.629937 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmn5m\" (UniqueName: \"kubernetes.io/projected/244a5a16-e215-4461-95cd-0b6d95e31d0e-kube-api-access-vmn5m\") pod \"cilium-operator-5cc964979-mwbs9\" (UID: \"244a5a16-e215-4461-95cd-0b6d95e31d0e\") " pod="kube-system/cilium-operator-5cc964979-mwbs9" Jan 13 20:10:13.630052 kubelet[2612]: I0113 20:10:13.629983 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/244a5a16-e215-4461-95cd-0b6d95e31d0e-cilium-config-path\") pod \"cilium-operator-5cc964979-mwbs9\" (UID: \"244a5a16-e215-4461-95cd-0b6d95e31d0e\") " pod="kube-system/cilium-operator-5cc964979-mwbs9" Jan 13 20:10:13.675353 kubelet[2612]: E0113 20:10:13.675308 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:13.676085 containerd[1445]: time="2025-01-13T20:10:13.675939787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4sqn6,Uid:212efb9d-c29d-487c-9d85-f1ece3cfa7d6,Namespace:kube-system,Attempt:0,}" Jan 13 20:10:13.679743 kubelet[2612]: E0113 20:10:13.679722 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:13.680426 containerd[1445]: time="2025-01-13T20:10:13.680190404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b6xv6,Uid:20e1414f-f785-4c00-9011-da60587c11f6,Namespace:kube-system,Attempt:0,}" Jan 13 20:10:13.706107 containerd[1445]: time="2025-01-13T20:10:13.705985414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:10:13.706107 containerd[1445]: time="2025-01-13T20:10:13.706047101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:10:13.706107 containerd[1445]: time="2025-01-13T20:10:13.706062302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:13.706421 containerd[1445]: time="2025-01-13T20:10:13.706346210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:13.708200 containerd[1445]: time="2025-01-13T20:10:13.707555889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:10:13.708200 containerd[1445]: time="2025-01-13T20:10:13.708007413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:10:13.708200 containerd[1445]: time="2025-01-13T20:10:13.708020934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:13.708200 containerd[1445]: time="2025-01-13T20:10:13.708103102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:13.727759 systemd[1]: Started cri-containerd-d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca.scope - libcontainer container d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca. Jan 13 20:10:13.731141 systemd[1]: Started cri-containerd-8a4d6ff96fc988941905fe8d987b78748736922fa1a76836785241dbf59cc2ac.scope - libcontainer container 8a4d6ff96fc988941905fe8d987b78748736922fa1a76836785241dbf59cc2ac. Jan 13 20:10:13.752900 containerd[1445]: time="2025-01-13T20:10:13.752864093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b6xv6,Uid:20e1414f-f785-4c00-9011-da60587c11f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca\"" Jan 13 20:10:13.754607 kubelet[2612]: E0113 20:10:13.754578 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:13.757273 containerd[1445]: time="2025-01-13T20:10:13.757211600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4sqn6,Uid:212efb9d-c29d-487c-9d85-f1ece3cfa7d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a4d6ff96fc988941905fe8d987b78748736922fa1a76836785241dbf59cc2ac\"" Jan 13 20:10:13.757895 kubelet[2612]: E0113 20:10:13.757870 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:13.763192 containerd[1445]: time="2025-01-13T20:10:13.763155263Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:10:13.764150 containerd[1445]: time="2025-01-13T20:10:13.764048590Z" level=info msg="CreateContainer within sandbox \"8a4d6ff96fc988941905fe8d987b78748736922fa1a76836785241dbf59cc2ac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:10:13.776297 containerd[1445]: time="2025-01-13T20:10:13.776247547Z" level=info msg="CreateContainer within sandbox \"8a4d6ff96fc988941905fe8d987b78748736922fa1a76836785241dbf59cc2ac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d35b21876a3400daa7b4e8448aaec4b4f07c76f4e8feee832c573fe98d24d0e9\"" Jan 13 20:10:13.777326 containerd[1445]: time="2025-01-13T20:10:13.776962657Z" level=info 
msg="StartContainer for \"d35b21876a3400daa7b4e8448aaec4b4f07c76f4e8feee832c573fe98d24d0e9\"" Jan 13 20:10:13.817789 systemd[1]: Started cri-containerd-d35b21876a3400daa7b4e8448aaec4b4f07c76f4e8feee832c573fe98d24d0e9.scope - libcontainer container d35b21876a3400daa7b4e8448aaec4b4f07c76f4e8feee832c573fe98d24d0e9. Jan 13 20:10:13.847276 containerd[1445]: time="2025-01-13T20:10:13.847231750Z" level=info msg="StartContainer for \"d35b21876a3400daa7b4e8448aaec4b4f07c76f4e8feee832c573fe98d24d0e9\" returns successfully" Jan 13 20:10:13.878760 kubelet[2612]: E0113 20:10:13.878720 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:13.879482 containerd[1445]: time="2025-01-13T20:10:13.879449311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-mwbs9,Uid:244a5a16-e215-4461-95cd-0b6d95e31d0e,Namespace:kube-system,Attempt:0,}" Jan 13 20:10:13.904222 containerd[1445]: time="2025-01-13T20:10:13.903951915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:10:13.904222 containerd[1445]: time="2025-01-13T20:10:13.904039283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:10:13.904222 containerd[1445]: time="2025-01-13T20:10:13.904055805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:13.904992 containerd[1445]: time="2025-01-13T20:10:13.904181457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:13.927779 systemd[1]: Started cri-containerd-2ea650c6e5e5869c70013d72b51b940e42a850b3dd55d9fb4aaeebe30501db25.scope - libcontainer container 2ea650c6e5e5869c70013d72b51b940e42a850b3dd55d9fb4aaeebe30501db25. Jan 13 20:10:13.965561 containerd[1445]: time="2025-01-13T20:10:13.965097273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-mwbs9,Uid:244a5a16-e215-4461-95cd-0b6d95e31d0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ea650c6e5e5869c70013d72b51b940e42a850b3dd55d9fb4aaeebe30501db25\"" Jan 13 20:10:13.965906 kubelet[2612]: E0113 20:10:13.965883 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:13.974199 kubelet[2612]: E0113 20:10:13.974144 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:13.985156 kubelet[2612]: I0113 20:10:13.985119 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4sqn6" podStartSLOduration=0.985067952 podStartE2EDuration="985.067952ms" podCreationTimestamp="2025-01-13 20:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:10:13.984693955 +0000 UTC m=+17.139664499" watchObservedRunningTime="2025-01-13 20:10:13.985067952 +0000 UTC m=+17.140038496" Jan 13 20:10:19.791676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2430178746.mount: Deactivated successfully. 
Jan 13 20:10:21.106834 containerd[1445]: time="2025-01-13T20:10:21.106758120Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650910" Jan 13 20:10:21.108458 containerd[1445]: time="2025-01-13T20:10:21.108394755Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:21.109860 containerd[1445]: time="2025-01-13T20:10:21.109826296Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.346496856s" Jan 13 20:10:21.109916 containerd[1445]: time="2025-01-13T20:10:21.109862218Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 20:10:21.110497 containerd[1445]: time="2025-01-13T20:10:21.110470221Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:21.111914 containerd[1445]: time="2025-01-13T20:10:21.111728630Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:10:21.119959 containerd[1445]: time="2025-01-13T20:10:21.119095588Z" level=info msg="CreateContainer within sandbox \"d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:10:21.153934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount810129706.mount: Deactivated successfully. Jan 13 20:10:21.154469 containerd[1445]: time="2025-01-13T20:10:21.154434354Z" level=info msg="CreateContainer within sandbox \"d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb\"" Jan 13 20:10:21.168226 containerd[1445]: time="2025-01-13T20:10:21.168179522Z" level=info msg="StartContainer for \"b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb\"" Jan 13 20:10:21.199768 systemd[1]: Started cri-containerd-b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb.scope - libcontainer container b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb. Jan 13 20:10:21.252656 containerd[1445]: time="2025-01-13T20:10:21.252519255Z" level=info msg="StartContainer for \"b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb\" returns successfully" Jan 13 20:10:21.286671 systemd[1]: cri-containerd-b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb.scope: Deactivated successfully. 
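The pull above reports both the byte count ("bytes read=157650910") and the elapsed time ("in 7.346496856s"), which is enough for a back-of-the-envelope throughput figure for the cilium image download:

```python
# Figures copied from the containerd entries above.
bytes_read = 157_650_910        # "bytes read=157650910"
pull_seconds = 7.346496856      # "in 7.346496856s"

print(f"{bytes_read / pull_seconds / 2**20:.1f} MiB/s")  # ~20.5 MiB/s effective
```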
Jan 13 20:10:21.330160 containerd[1445]: time="2025-01-13T20:10:21.324338469Z" level=info msg="shim disconnected" id=b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb namespace=k8s.io Jan 13 20:10:21.330160 containerd[1445]: time="2025-01-13T20:10:21.330157198Z" level=warning msg="cleaning up after shim disconnected" id=b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb namespace=k8s.io Jan 13 20:10:21.330160 containerd[1445]: time="2025-01-13T20:10:21.330170159Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:10:22.004008 kubelet[2612]: E0113 20:10:22.003327 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:22.007538 containerd[1445]: time="2025-01-13T20:10:22.005750881Z" level=info msg="CreateContainer within sandbox \"d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:10:22.016516 containerd[1445]: time="2025-01-13T20:10:22.016478648Z" level=info msg="CreateContainer within sandbox \"d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65\"" Jan 13 20:10:22.022739 containerd[1445]: time="2025-01-13T20:10:22.022703830Z" level=info msg="StartContainer for \"99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65\"" Jan 13 20:10:22.044823 systemd[1]: Started cri-containerd-99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65.scope - libcontainer container 99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65. Jan 13 20:10:22.063071 containerd[1445]: time="2025-01-13T20:10:22.063036604Z" level=info msg="StartContainer for \"99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65\" returns successfully" Jan 13 20:10:22.081783 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:10:22.081999 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:10:22.082060 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:10:22.089855 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:10:22.090030 systemd[1]: cri-containerd-99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65.scope: Deactivated successfully. Jan 13 20:10:22.108179 containerd[1445]: time="2025-01-13T20:10:22.108118821Z" level=info msg="shim disconnected" id=99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65 namespace=k8s.io Jan 13 20:10:22.108179 containerd[1445]: time="2025-01-13T20:10:22.108169184Z" level=warning msg="cleaning up after shim disconnected" id=99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65 namespace=k8s.io Jan 13 20:10:22.108179 containerd[1445]: time="2025-01-13T20:10:22.108177385Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:10:22.119222 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:10:22.151802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb-rootfs.mount: Deactivated successfully. Jan 13 20:10:22.328203 systemd[1]: Started sshd@7-10.0.0.49:22-10.0.0.1:34648.service - OpenSSH per-connection server daemon (10.0.0.1:34648). 
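The recurring kubelet "Nameserver limits exceeded" error above means the node's resolv.conf lists more nameservers than the three-entry limit kubelet honors (matching the classic glibc resolver limit); the surplus entries are dropped and only the first three are applied. A sketch of that truncation, using a hypothetical resolv.conf with a fourth server added for illustration:

```python
# Hypothetical resolv.conf content; the first three entries match the
# "applied nameserver line" in the kubelet errors above.
RESOLV_CONF = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
"""

MAX_NAMESERVERS = 3  # the limit kubelet enforces

servers = [
    line.split()[1]
    for line in RESOLV_CONF.splitlines()
    if line.startswith("nameserver")
]
applied = servers[:MAX_NAMESERVERS]
if len(servers) > MAX_NAMESERVERS:
    print(f"Nameserver limits exceeded, applied: {' '.join(applied)}")
    # -> Nameserver limits exceeded, applied: 1.1.1.1 1.0.0.1 8.8.8.8
```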
Jan 13 20:10:22.376071 sshd[3148]: Accepted publickey for core from 10.0.0.1 port 34648 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:10:22.377422 sshd-session[3148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:22.381646 systemd-logind[1427]: New session 8 of user core. Jan 13 20:10:22.395798 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:10:22.497821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount420170686.mount: Deactivated successfully. Jan 13 20:10:22.527625 sshd[3150]: Connection closed by 10.0.0.1 port 34648 Jan 13 20:10:22.527572 sshd-session[3148]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:22.530837 systemd[1]: sshd@7-10.0.0.49:22-10.0.0.1:34648.service: Deactivated successfully. Jan 13 20:10:22.532808 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:10:22.534036 systemd-logind[1427]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:10:22.535899 systemd-logind[1427]: Removed session 8. Jan 13 20:10:23.003210 kubelet[2612]: E0113 20:10:23.002946 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:23.007570 containerd[1445]: time="2025-01-13T20:10:23.007527900Z" level=info msg="CreateContainer within sandbox \"d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:10:23.022915 containerd[1445]: time="2025-01-13T20:10:23.022851662Z" level=info msg="CreateContainer within sandbox \"d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1\"" Jan 13 20:10:23.023489 containerd[1445]: time="2025-01-13T20:10:23.023458061Z" level=info msg="StartContainer for \"99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1\"" Jan 13 20:10:23.050789 systemd[1]: Started cri-containerd-99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1.scope - libcontainer container 99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1. Jan 13 20:10:23.074709 containerd[1445]: time="2025-01-13T20:10:23.074604606Z" level=info msg="StartContainer for \"99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1\" returns successfully" Jan 13 20:10:23.101924 systemd[1]: cri-containerd-99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1.scope: Deactivated successfully. 
Jan 13 20:10:23.131741 containerd[1445]: time="2025-01-13T20:10:23.130956730Z" level=info msg="shim disconnected" id=99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1 namespace=k8s.io Jan 13 20:10:23.131741 containerd[1445]: time="2025-01-13T20:10:23.131744422Z" level=warning msg="cleaning up after shim disconnected" id=99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1 namespace=k8s.io Jan 13 20:10:23.132208 containerd[1445]: time="2025-01-13T20:10:23.131757823Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:10:24.018606 kubelet[2612]: E0113 20:10:24.018564 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:24.036458 containerd[1445]: time="2025-01-13T20:10:24.036306612Z" level=info msg="CreateContainer within sandbox \"d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:10:24.048685 containerd[1445]: time="2025-01-13T20:10:24.048640910Z" level=info msg="CreateContainer within sandbox \"d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e\"" Jan 13 20:10:24.049300 containerd[1445]: time="2025-01-13T20:10:24.049273110Z" level=info msg="StartContainer for \"45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e\"" Jan 13 20:10:24.079769 systemd[1]: Started cri-containerd-45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e.scope - libcontainer container 45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e. Jan 13 20:10:24.101355 systemd[1]: cri-containerd-45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e.scope: Deactivated successfully. Jan 13 20:10:24.104129 containerd[1445]: time="2025-01-13T20:10:24.104094371Z" level=info msg="StartContainer for \"45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e\" returns successfully" Jan 13 20:10:24.125646 containerd[1445]: time="2025-01-13T20:10:24.125556886Z" level=info msg="shim disconnected" id=45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e namespace=k8s.io Jan 13 20:10:24.125646 containerd[1445]: time="2025-01-13T20:10:24.125639892Z" level=warning msg="cleaning up after shim disconnected" id=45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e namespace=k8s.io Jan 13 20:10:24.125646 containerd[1445]: time="2025-01-13T20:10:24.125648892Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:10:24.151550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e-rootfs.mount: Deactivated successfully. Jan 13 20:10:25.021191 kubelet[2612]: E0113 20:10:25.021032 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:25.023661 containerd[1445]: time="2025-01-13T20:10:25.023625257Z" level=info msg="CreateContainer within sandbox \"d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:10:25.035257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968817295.mount: Deactivated successfully. 
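Read across the CreateContainer entries above, the cilium pod's containers come up in a fixed sequence: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and finally the long-running cilium-agent. A small sketch that recovers this order from the journal text (entries abbreviated):

```python
import re

# Abbreviated copies of the CreateContainer entries above.
LOG = """\
CreateContainer ... for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}
CreateContainer ... for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}
CreateContainer ... for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}
CreateContainer ... for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}
CreateContainer ... for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}
"""

# Pull the container names out of the &ContainerMetadata{...} payload in order.
print(re.findall(r"ContainerMetadata\{Name:([^,]+),", LOG))
# ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs',
#  'clean-cilium-state', 'cilium-agent']
```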
Jan 13 20:10:25.039381 containerd[1445]: time="2025-01-13T20:10:25.039332375Z" level=info msg="CreateContainer within sandbox \"d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e\"" Jan 13 20:10:25.040434 containerd[1445]: time="2025-01-13T20:10:25.040116863Z" level=info msg="StartContainer for \"a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e\"" Jan 13 20:10:25.068770 systemd[1]: Started cri-containerd-a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e.scope - libcontainer container a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e. Jan 13 20:10:25.096496 containerd[1445]: time="2025-01-13T20:10:25.093311469Z" level=info msg="StartContainer for \"a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e\" returns successfully" Jan 13 20:10:25.264469 kubelet[2612]: I0113 20:10:25.264433 2612 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:10:25.300667 kubelet[2612]: I0113 20:10:25.300539 2612 topology_manager.go:215] "Topology Admit Handler" podUID="74231419-983f-4d43-bfa8-5f9016720667" podNamespace="kube-system" podName="coredns-76f75df574-gmjh6" Jan 13 20:10:25.300771 kubelet[2612]: I0113 20:10:25.300728 2612 topology_manager.go:215] "Topology Admit Handler" podUID="2c0d2896-4e3d-4481-9b6a-ce0f8e16a5d0" podNamespace="kube-system" podName="coredns-76f75df574-wd96c" Jan 13 20:10:25.314455 systemd[1]: Created slice kubepods-burstable-pod74231419_983f_4d43_bfa8_5f9016720667.slice - libcontainer container kubepods-burstable-pod74231419_983f_4d43_bfa8_5f9016720667.slice. Jan 13 20:10:25.324037 systemd[1]: Created slice kubepods-burstable-pod2c0d2896_4e3d_4481_9b6a_ce0f8e16a5d0.slice - libcontainer container kubepods-burstable-pod2c0d2896_4e3d_4481_9b6a_ce0f8e16a5d0.slice. 
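The slice names in the "Created slice" entries above encode the pod UIDs: with the systemd cgroup driver, kubelet replaces the dashes in a pod UID with underscores when forming the kubepods-burstable-pod<uid>.slice unit name. A one-liner reproducing the name logged for the coredns-76f75df574-gmjh6 pod:

```python
def burstable_pod_slice(pod_uid: str) -> str:
    # kubelet's systemd cgroup driver swaps '-' for '_' in the UID.
    return f"kubepods-burstable-pod{pod_uid.replace('-', '_')}.slice"

print(burstable_pod_slice("74231419-983f-4d43-bfa8-5f9016720667"))
# kubepods-burstable-pod74231419_983f_4d43_bfa8_5f9016720667.slice
```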
Jan 13 20:10:25.406434 kubelet[2612]: I0113 20:10:25.406336 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84rdd\" (UniqueName: \"kubernetes.io/projected/74231419-983f-4d43-bfa8-5f9016720667-kube-api-access-84rdd\") pod \"coredns-76f75df574-gmjh6\" (UID: \"74231419-983f-4d43-bfa8-5f9016720667\") " pod="kube-system/coredns-76f75df574-gmjh6" Jan 13 20:10:25.406434 kubelet[2612]: I0113 20:10:25.406402 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74231419-983f-4d43-bfa8-5f9016720667-config-volume\") pod \"coredns-76f75df574-gmjh6\" (UID: \"74231419-983f-4d43-bfa8-5f9016720667\") " pod="kube-system/coredns-76f75df574-gmjh6" Jan 13 20:10:25.406591 kubelet[2612]: I0113 20:10:25.406492 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c0d2896-4e3d-4481-9b6a-ce0f8e16a5d0-config-volume\") pod \"coredns-76f75df574-wd96c\" (UID: \"2c0d2896-4e3d-4481-9b6a-ce0f8e16a5d0\") " pod="kube-system/coredns-76f75df574-wd96c" Jan 13 20:10:25.406591 kubelet[2612]: I0113 20:10:25.406542 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxmps\" (UniqueName: \"kubernetes.io/projected/2c0d2896-4e3d-4481-9b6a-ce0f8e16a5d0-kube-api-access-qxmps\") pod \"coredns-76f75df574-wd96c\" (UID: \"2c0d2896-4e3d-4481-9b6a-ce0f8e16a5d0\") " pod="kube-system/coredns-76f75df574-wd96c" Jan 13 20:10:25.621083 kubelet[2612]: E0113 20:10:25.621048 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:25.622695 containerd[1445]: time="2025-01-13T20:10:25.621820078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gmjh6,Uid:74231419-983f-4d43-bfa8-5f9016720667,Namespace:kube-system,Attempt:0,}" Jan 13 20:10:25.627682 kubelet[2612]: E0113 20:10:25.627643 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:25.628133 containerd[1445]: time="2025-01-13T20:10:25.628094981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wd96c,Uid:2c0d2896-4e3d-4481-9b6a-ce0f8e16a5d0,Namespace:kube-system,Attempt:0,}" Jan 13 20:10:25.873551 containerd[1445]: time="2025-01-13T20:10:25.873428911Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:25.874656 containerd[1445]: time="2025-01-13T20:10:25.874592702Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137106" Jan 13 20:10:25.875690 containerd[1445]: time="2025-01-13T20:10:25.875653846Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:25.877467 containerd[1445]: time="2025-01-13T20:10:25.877173979Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.765408347s" Jan 13 20:10:25.877467 containerd[1445]: time="2025-01-13T20:10:25.877210741Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 20:10:25.879134 containerd[1445]: time="2025-01-13T20:10:25.879012451Z" level=info msg="CreateContainer within sandbox \"2ea650c6e5e5869c70013d72b51b940e42a850b3dd55d9fb4aaeebe30501db25\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:10:25.897271 containerd[1445]: time="2025-01-13T20:10:25.897219082Z" level=info msg="CreateContainer within sandbox \"2ea650c6e5e5869c70013d72b51b940e42a850b3dd55d9fb4aaeebe30501db25\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410\"" Jan 13 20:10:25.897772 containerd[1445]: time="2025-01-13T20:10:25.897748475Z" level=info msg="StartContainer for \"5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410\"" Jan 13 20:10:25.925801 systemd[1]: Started cri-containerd-5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410.scope - libcontainer container 5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410. Jan 13 20:10:25.959071 containerd[1445]: time="2025-01-13T20:10:25.959027494Z" level=info msg="StartContainer for \"5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410\" returns successfully" Jan 13 20:10:26.036161 kubelet[2612]: E0113 20:10:26.034668 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:26.046090 kubelet[2612]: I0113 20:10:26.045785 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-mwbs9" podStartSLOduration=1.134603291 podStartE2EDuration="13.045747536s" podCreationTimestamp="2025-01-13 20:10:13 +0000 UTC" firstStartedPulling="2025-01-13 20:10:13.966305311 +0000 UTC m=+17.121275935" lastFinishedPulling="2025-01-13 20:10:25.877449636 +0000 UTC m=+29.032420180" observedRunningTime="2025-01-13 20:10:26.044548105 +0000 UTC m=+29.199518649" watchObservedRunningTime="2025-01-13 20:10:26.045747536 +0000 UTC m=+29.200718040" Jan 13 20:10:26.046090 kubelet[2612]: E0113 20:10:26.045917 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:27.047492 kubelet[2612]: E0113 20:10:27.047450 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:27.047848 kubelet[2612]: E0113 20:10:27.047792 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:27.540059 systemd[1]: Started 
sshd@8-10.0.0.49:22-10.0.0.1:45670.service - OpenSSH per-connection server daemon (10.0.0.1:45670). Jan 13 20:10:27.619247 sshd[3484]: Accepted publickey for core from 10.0.0.1 port 45670 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:10:27.619892 sshd-session[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:27.623293 systemd-logind[1427]: New session 9 of user core. Jan 13 20:10:27.631775 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:10:27.751832 sshd[3486]: Connection closed by 10.0.0.1 port 45670 Jan 13 20:10:27.752175 sshd-session[3484]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:27.755837 systemd[1]: sshd@8-10.0.0.49:22-10.0.0.1:45670.service: Deactivated successfully. Jan 13 20:10:27.758182 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:10:27.759208 systemd-logind[1427]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:10:27.762034 systemd-logind[1427]: Removed session 9. Jan 13 20:10:28.049029 kubelet[2612]: E0113 20:10:28.048964 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:30.134357 systemd-networkd[1383]: cilium_host: Link UP Jan 13 20:10:30.134478 systemd-networkd[1383]: cilium_net: Link UP Jan 13 20:10:30.135099 systemd-networkd[1383]: cilium_net: Gained carrier Jan 13 20:10:30.135286 systemd-networkd[1383]: cilium_host: Gained carrier Jan 13 20:10:30.221158 systemd-networkd[1383]: cilium_vxlan: Link UP Jan 13 20:10:30.221164 systemd-networkd[1383]: cilium_vxlan: Gained carrier Jan 13 20:10:30.553488 kernel: NET: Registered PF_ALG protocol family Jan 13 20:10:30.553708 systemd-networkd[1383]: cilium_host: Gained IPv6LL Jan 13 20:10:31.089739 systemd-networkd[1383]: cilium_net: Gained IPv6LL Jan 13 20:10:31.229015 systemd-networkd[1383]: lxc_health: Link UP Jan 13 20:10:31.236409 systemd-networkd[1383]: lxc_health: Gained carrier Jan 13 20:10:31.686621 kubelet[2612]: E0113 20:10:31.686141 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:31.712701 kubelet[2612]: I0113 20:10:31.712654 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-b6xv6" podStartSLOduration=11.363196464 podStartE2EDuration="18.712580524s" podCreationTimestamp="2025-01-13 20:10:13 +0000 UTC" firstStartedPulling="2025-01-13 20:10:13.761874537 +0000 UTC m=+16.916845041" lastFinishedPulling="2025-01-13 20:10:21.111258477 +0000 UTC m=+24.266229101" observedRunningTime="2025-01-13 20:10:26.068283306 +0000 UTC m=+29.223253850" watchObservedRunningTime="2025-01-13 20:10:31.712580524 +0000 UTC m=+34.867551068" Jan 13 20:10:31.789749 systemd-networkd[1383]: lxc4e1b39e597d3: Link UP Jan 13 20:10:31.796790 kernel: eth0: renamed from tmp933bc Jan 13 20:10:31.814629 kernel: eth0: renamed from tmp72193 Jan 13 20:10:31.818073 systemd-networkd[1383]: lxc4e1b39e597d3: Gained carrier Jan 13 20:10:31.818317 systemd-networkd[1383]: lxc898016b192f2: Link UP Jan 13 20:10:31.819686 systemd-networkd[1383]: lxc898016b192f2: Gained carrier Jan 13 20:10:31.858781 systemd-networkd[1383]: cilium_vxlan: Gained IPv6LL Jan 13 20:10:32.055286 kubelet[2612]: E0113 20:10:32.055188 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:32.766254 systemd[1]: Started sshd@9-10.0.0.49:22-10.0.0.1:33028.service - OpenSSH per-connection server daemon (10.0.0.1:33028). Jan 13 20:10:32.813726 sshd[3873]: Accepted publickey for core from 10.0.0.1 port 33028 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:10:32.815004 sshd-session[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:32.820268 systemd-logind[1427]: New session 10 of user core. Jan 13 20:10:32.823769 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:10:32.940580 sshd[3875]: Connection closed by 10.0.0.1 port 33028 Jan 13 20:10:32.940922 sshd-session[3873]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:32.946547 systemd[1]: sshd@9-10.0.0.49:22-10.0.0.1:33028.service: Deactivated successfully. Jan 13 20:10:32.948262 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:10:32.950130 systemd-logind[1427]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:10:32.950898 systemd-logind[1427]: Removed session 10. Jan 13 20:10:33.056658 kubelet[2612]: E0113 20:10:33.056373 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:33.074771 systemd-networkd[1383]: lxc_health: Gained IPv6LL Jan 13 20:10:33.265769 systemd-networkd[1383]: lxc4e1b39e597d3: Gained IPv6LL Jan 13 20:10:33.714838 systemd-networkd[1383]: lxc898016b192f2: Gained IPv6LL Jan 13 20:10:35.405905 containerd[1445]: time="2025-01-13T20:10:35.405762072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:10:35.405905 containerd[1445]: time="2025-01-13T20:10:35.405822955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:10:35.405905 containerd[1445]: time="2025-01-13T20:10:35.405839556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:35.406388 containerd[1445]: time="2025-01-13T20:10:35.405961002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:35.406785 containerd[1445]: time="2025-01-13T20:10:35.406710596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:10:35.406785 containerd[1445]: time="2025-01-13T20:10:35.406749398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:10:35.406785 containerd[1445]: time="2025-01-13T20:10:35.406759758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:35.406943 containerd[1445]: time="2025-01-13T20:10:35.406813321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:35.423213 systemd[1]: Started cri-containerd-721930edf61e9d826d574f4bf30a09e9643f59f04f51e97f64bdb15975195e38.scope - libcontainer container 721930edf61e9d826d574f4bf30a09e9643f59f04f51e97f64bdb15975195e38. 
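containerd's entries here are logfmt-style key=value pairs with a quoted msg field, so they are easy to post-process; shlex handles the quoting, after which each token splits cleanly on the first "=". A sketch using one of the plugin-loading lines above (msg abbreviated):

```python
import shlex

# One plugin-loading entry from above, msg abbreviated.
line = ('time="2025-01-13T20:10:35.405961002Z" level=info '
        'msg="loading plugin ..." runtime=io.containerd.runc.v2 '
        'type=io.containerd.ttrpc.v1')

# shlex strips the quotes around msg="...", leaving plain key=value tokens.
fields = dict(tok.split("=", 1) for tok in shlex.split(line))
print(fields["level"], fields["runtime"])  # info io.containerd.runc.v2
```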
Jan 13 20:10:35.428278 systemd[1]: Started cri-containerd-933bcc2b09aadd1fc9492bca5717c1866344a8635496ac244ab5ad6a7c5b81a1.scope - libcontainer container 933bcc2b09aadd1fc9492bca5717c1866344a8635496ac244ab5ad6a7c5b81a1. Jan 13 20:10:35.437118 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:10:35.440350 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:10:35.455969 containerd[1445]: time="2025-01-13T20:10:35.455929657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gmjh6,Uid:74231419-983f-4d43-bfa8-5f9016720667,Namespace:kube-system,Attempt:0,} returns sandbox id \"721930edf61e9d826d574f4bf30a09e9643f59f04f51e97f64bdb15975195e38\"" Jan 13 20:10:35.457652 kubelet[2612]: E0113 20:10:35.456892 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:35.458966 containerd[1445]: time="2025-01-13T20:10:35.458822630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wd96c,Uid:2c0d2896-4e3d-4481-9b6a-ce0f8e16a5d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"933bcc2b09aadd1fc9492bca5717c1866344a8635496ac244ab5ad6a7c5b81a1\"" Jan 13 20:10:35.460548 containerd[1445]: time="2025-01-13T20:10:35.460520628Z" level=info msg="CreateContainer within sandbox \"721930edf61e9d826d574f4bf30a09e9643f59f04f51e97f64bdb15975195e38\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:10:35.461241 kubelet[2612]: E0113 20:10:35.461222 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:35.463421 containerd[1445]: time="2025-01-13T20:10:35.463317837Z" level=info msg="CreateContainer within sandbox \"933bcc2b09aadd1fc9492bca5717c1866344a8635496ac244ab5ad6a7c5b81a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:10:35.476109 containerd[1445]: time="2025-01-13T20:10:35.476077503Z" level=info msg="CreateContainer within sandbox \"721930edf61e9d826d574f4bf30a09e9643f59f04f51e97f64bdb15975195e38\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4d4f3cbcccce53cdd809475044b05588e00c530d2c3bc76b13740addb8f98afa\"" Jan 13 20:10:35.478313 containerd[1445]: time="2025-01-13T20:10:35.477459286Z" level=info msg="StartContainer for \"4d4f3cbcccce53cdd809475044b05588e00c530d2c3bc76b13740addb8f98afa\"" Jan 13 20:10:35.479543 containerd[1445]: time="2025-01-13T20:10:35.479501780Z" level=info msg="CreateContainer within sandbox \"933bcc2b09aadd1fc9492bca5717c1866344a8635496ac244ab5ad6a7c5b81a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14ab1ede83875604f746aac5567c4db237b4823b8d95a4cff738e95075f934dd\"" Jan 13 20:10:35.480110 containerd[1445]: time="2025-01-13T20:10:35.480076486Z" level=info msg="StartContainer for \"14ab1ede83875604f746aac5567c4db237b4823b8d95a4cff738e95075f934dd\"" Jan 13 20:10:35.509009 systemd[1]: Started cri-containerd-14ab1ede83875604f746aac5567c4db237b4823b8d95a4cff738e95075f934dd.scope - libcontainer container 14ab1ede83875604f746aac5567c4db237b4823b8d95a4cff738e95075f934dd. 
Jan 13 20:10:35.510103 systemd[1]: Started cri-containerd-4d4f3cbcccce53cdd809475044b05588e00c530d2c3bc76b13740addb8f98afa.scope - libcontainer container 4d4f3cbcccce53cdd809475044b05588e00c530d2c3bc76b13740addb8f98afa. Jan 13 20:10:35.534450 containerd[1445]: time="2025-01-13T20:10:35.534380781Z" level=info msg="StartContainer for \"4d4f3cbcccce53cdd809475044b05588e00c530d2c3bc76b13740addb8f98afa\" returns successfully" Jan 13 20:10:35.541232 containerd[1445]: time="2025-01-13T20:10:35.541191574Z" level=info msg="StartContainer for \"14ab1ede83875604f746aac5567c4db237b4823b8d95a4cff738e95075f934dd\" returns successfully" Jan 13 20:10:36.066481 kubelet[2612]: E0113 20:10:36.065866 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:36.070465 kubelet[2612]: E0113 20:10:36.070435 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:36.084972 kubelet[2612]: I0113 20:10:36.084675 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-gmjh6" podStartSLOduration=23.084635333 podStartE2EDuration="23.084635333s" podCreationTimestamp="2025-01-13 20:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:10:36.083519242 +0000 UTC m=+39.238489787" watchObservedRunningTime="2025-01-13 20:10:36.084635333 +0000 UTC m=+39.239605877" Jan 13 20:10:36.412719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount724010528.mount: Deactivated successfully. Jan 13 20:10:37.071307 kubelet[2612]: E0113 20:10:37.071277 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:37.071800 kubelet[2612]: E0113 20:10:37.071337 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:37.957177 systemd[1]: Started sshd@10-10.0.0.49:22-10.0.0.1:33042.service - OpenSSH per-connection server daemon (10.0.0.1:33042). Jan 13 20:10:38.001909 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 33042 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:10:38.003296 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:38.006725 systemd-logind[1427]: New session 11 of user core. Jan 13 20:10:38.021804 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 13 20:10:38.073090 kubelet[2612]: E0113 20:10:38.073014 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:38.073090 kubelet[2612]: E0113 20:10:38.073039 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:10:38.135627 sshd[4069]: Connection closed by 10.0.0.1 port 33042 Jan 13 20:10:38.136147 sshd-session[4067]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:38.138419 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:10:38.139632 systemd-logind[1427]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:10:38.139871 systemd[1]: sshd@10-10.0.0.49:22-10.0.0.1:33042.service: Deactivated successfully. Jan 13 20:10:38.142807 systemd-logind[1427]: Removed session 11. Jan 13 20:10:43.146940 systemd[1]: Started sshd@11-10.0.0.49:22-10.0.0.1:39930.service - OpenSSH per-connection server daemon (10.0.0.1:39930). Jan 13 20:10:43.193246 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 39930 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:10:43.194340 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:43.197965 systemd-logind[1427]: New session 12 of user core. Jan 13 20:10:43.204733 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:10:43.313043 sshd[4084]: Connection closed by 10.0.0.1 port 39930 Jan 13 20:10:43.313490 sshd-session[4082]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:43.322019 systemd[1]: sshd@11-10.0.0.49:22-10.0.0.1:39930.service: Deactivated successfully. Jan 13 20:10:43.323404 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:10:43.325712 systemd-logind[1427]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:10:43.331811 systemd[1]: Started sshd@12-10.0.0.49:22-10.0.0.1:39938.service - OpenSSH per-connection server daemon (10.0.0.1:39938). Jan 13 20:10:43.332550 systemd-logind[1427]: Removed session 12. Jan 13 20:10:43.366520 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 39938 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:10:43.367920 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:43.371386 systemd-logind[1427]: New session 13 of user core. Jan 13 20:10:43.381721 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:10:43.535181 sshd[4099]: Connection closed by 10.0.0.1 port 39938 Jan 13 20:10:43.531371 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:43.550313 systemd[1]: sshd@12-10.0.0.49:22-10.0.0.1:39938.service: Deactivated successfully. Jan 13 20:10:43.552992 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:10:43.554621 systemd-logind[1427]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:10:43.563919 systemd[1]: Started sshd@13-10.0.0.49:22-10.0.0.1:39944.service - OpenSSH per-connection server daemon (10.0.0.1:39944). Jan 13 20:10:43.565054 systemd-logind[1427]: Removed session 13. 
Jan 13 20:10:43.599068 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 39944 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:10:43.600230 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:43.603931 systemd-logind[1427]: New session 14 of user core. Jan 13 20:10:43.613754 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:10:43.725890 sshd[4112]: Connection closed by 10.0.0.1 port 39944 Jan 13 20:10:43.726452 sshd-session[4110]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:43.729110 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:10:43.730366 systemd-logind[1427]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:10:43.730566 systemd[1]: sshd@13-10.0.0.49:22-10.0.0.1:39944.service: Deactivated successfully. Jan 13 20:10:43.732579 systemd-logind[1427]: Removed session 14. Jan 13 20:10:48.737061 systemd[1]: Started sshd@14-10.0.0.49:22-10.0.0.1:39950.service - OpenSSH per-connection server daemon (10.0.0.1:39950). Jan 13 20:10:48.775280 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 39950 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:10:48.776492 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:48.779913 systemd-logind[1427]: New session 15 of user core. Jan 13 20:10:48.790751 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:10:48.900548 sshd[4130]: Connection closed by 10.0.0.1 port 39950 Jan 13 20:10:48.900888 sshd-session[4128]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:48.904786 systemd[1]: sshd@14-10.0.0.49:22-10.0.0.1:39950.service: Deactivated successfully. Jan 13 20:10:48.907064 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:10:48.908145 systemd-logind[1427]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:10:48.909269 systemd-logind[1427]: Removed session 15. Jan 13 20:10:53.912237 systemd[1]: Started sshd@15-10.0.0.49:22-10.0.0.1:50960.service - OpenSSH per-connection server daemon (10.0.0.1:50960). Jan 13 20:10:53.957981 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 50960 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:10:53.959463 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:53.965829 systemd-logind[1427]: New session 16 of user core. Jan 13 20:10:53.980798 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:10:54.114947 sshd[4144]: Connection closed by 10.0.0.1 port 50960 Jan 13 20:10:54.115366 sshd-session[4142]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:54.127395 systemd[1]: sshd@15-10.0.0.49:22-10.0.0.1:50960.service: Deactivated successfully. Jan 13 20:10:54.130150 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:10:54.132247 systemd-logind[1427]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:10:54.147945 systemd[1]: Started sshd@16-10.0.0.49:22-10.0.0.1:50970.service - OpenSSH per-connection server daemon (10.0.0.1:50970). Jan 13 20:10:54.148911 systemd-logind[1427]: Removed session 16. 
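Each SSH session above follows the same lifecycle: Accepted publickey, pam_unix session opened, session-N.scope started, connection closed, scope and service deactivated. The journal timestamps make the session length easy to recover; for session 15, copied from the entries above (the journal's short date format omits the year, which cancels out in the subtraction):

```python
from datetime import datetime

# Timestamps from the session-15 entries above.
fmt = "%b %d %H:%M:%S.%f"
accepted = datetime.strptime("Jan 13 20:10:48.775280", fmt)
closed = datetime.strptime("Jan 13 20:10:48.900548", fmt)
print((closed - accepted).total_seconds())  # ~0.125 s from auth to disconnect
```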
Jan 13 20:10:54.183421 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 50970 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:10:54.184747 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:54.188809 systemd-logind[1427]: New session 17 of user core. Jan 13 20:10:54.202808 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:10:54.402099 sshd[4158]: Connection closed by 10.0.0.1 port 50970 Jan 13 20:10:54.402760 sshd-session[4156]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:54.413222 systemd[1]: sshd@16-10.0.0.49:22-10.0.0.1:50970.service: Deactivated successfully. Jan 13 20:10:54.416182 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:10:54.418775 systemd-logind[1427]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:10:54.428153 systemd[1]: Started sshd@17-10.0.0.49:22-10.0.0.1:50978.service - OpenSSH per-connection server daemon (10.0.0.1:50978). Jan 13 20:10:54.429284 systemd-logind[1427]: Removed session 17. Jan 13 20:10:54.471637 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 50978 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:10:54.472902 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:54.476730 systemd-logind[1427]: New session 18 of user core. Jan 13 20:10:54.488764 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:10:55.787435 sshd[4170]: Connection closed by 10.0.0.1 port 50978 Jan 13 20:10:55.788106 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:55.795441 systemd[1]: sshd@17-10.0.0.49:22-10.0.0.1:50978.service: Deactivated successfully. Jan 13 20:10:55.798025 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:10:55.799487 systemd-logind[1427]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:10:55.803297 systemd[1]: Started sshd@18-10.0.0.49:22-10.0.0.1:50980.service - OpenSSH per-connection server daemon (10.0.0.1:50980). Jan 13 20:10:55.805056 systemd-logind[1427]: Removed session 18. Jan 13 20:10:55.852091 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 50980 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:10:55.853370 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:55.857278 systemd-logind[1427]: New session 19 of user core. Jan 13 20:10:55.868765 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:10:56.082154 sshd[4190]: Connection closed by 10.0.0.1 port 50980 Jan 13 20:10:56.083162 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:56.092858 systemd[1]: sshd@18-10.0.0.49:22-10.0.0.1:50980.service: Deactivated successfully. Jan 13 20:10:56.094420 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:10:56.098834 systemd-logind[1427]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:10:56.109912 systemd[1]: Started sshd@19-10.0.0.49:22-10.0.0.1:50990.service - OpenSSH per-connection server daemon (10.0.0.1:50990). Jan 13 20:10:56.110876 systemd-logind[1427]: Removed session 19. 
Jan 13 20:10:56.145810 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 50990 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:10:56.147072 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:56.151218 systemd-logind[1427]: New session 20 of user core. Jan 13 20:10:56.161803 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:10:56.279150 sshd[4203]: Connection closed by 10.0.0.1 port 50990 Jan 13 20:10:56.279523 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:56.283051 systemd[1]: sshd@19-10.0.0.49:22-10.0.0.1:50990.service: Deactivated successfully. Jan 13 20:10:56.286227 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:10:56.288244 systemd-logind[1427]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:10:56.289188 systemd-logind[1427]: Removed session 20. Jan 13 20:11:01.290033 systemd[1]: Started sshd@20-10.0.0.49:22-10.0.0.1:51000.service - OpenSSH per-connection server daemon (10.0.0.1:51000). Jan 13 20:11:01.328354 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 51000 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:11:01.329503 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:01.333141 systemd-logind[1427]: New session 21 of user core. Jan 13 20:11:01.343757 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:11:01.450082 sshd[4222]: Connection closed by 10.0.0.1 port 51000 Jan 13 20:11:01.450429 sshd-session[4220]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:01.453410 systemd[1]: sshd@20-10.0.0.49:22-10.0.0.1:51000.service: Deactivated successfully. Jan 13 20:11:01.455082 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:11:01.457211 systemd-logind[1427]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:11:01.458380 systemd-logind[1427]: Removed session 21. Jan 13 20:11:06.461475 systemd[1]: Started sshd@21-10.0.0.49:22-10.0.0.1:39004.service - OpenSSH per-connection server daemon (10.0.0.1:39004). Jan 13 20:11:06.514819 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 39004 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:11:06.516001 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:06.520469 systemd-logind[1427]: New session 22 of user core. Jan 13 20:11:06.530749 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:11:06.649752 sshd[4237]: Connection closed by 10.0.0.1 port 39004 Jan 13 20:11:06.650057 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:06.653483 systemd[1]: sshd@21-10.0.0.49:22-10.0.0.1:39004.service: Deactivated successfully. Jan 13 20:11:06.656564 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:11:06.657878 systemd-logind[1427]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:11:06.659337 systemd-logind[1427]: Removed session 22. Jan 13 20:11:11.661203 systemd[1]: Started sshd@22-10.0.0.49:22-10.0.0.1:39008.service - OpenSSH per-connection server daemon (10.0.0.1:39008). 
Jan 13 20:11:11.700112 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 39008 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:11:11.701356 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:11.705197 systemd-logind[1427]: New session 23 of user core. Jan 13 20:11:11.714765 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:11:11.837615 sshd[4251]: Connection closed by 10.0.0.1 port 39008 Jan 13 20:11:11.838207 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:11.849106 systemd[1]: sshd@22-10.0.0.49:22-10.0.0.1:39008.service: Deactivated successfully. Jan 13 20:11:11.852715 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:11:11.854871 systemd-logind[1427]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:11:11.856073 systemd[1]: Started sshd@23-10.0.0.49:22-10.0.0.1:39010.service - OpenSSH per-connection server daemon (10.0.0.1:39010). Jan 13 20:11:11.856704 systemd-logind[1427]: Removed session 23. Jan 13 20:11:11.895772 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 39010 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:11:11.897064 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:11.900979 systemd-logind[1427]: New session 24 of user core. Jan 13 20:11:11.912819 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:11:13.902805 kubelet[2612]: I0113 20:11:13.901124 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wd96c" podStartSLOduration=60.901073697 podStartE2EDuration="1m0.901073697s" podCreationTimestamp="2025-01-13 20:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:10:36.108353838 +0000 UTC m=+39.263324382" watchObservedRunningTime="2025-01-13 20:11:13.901073697 +0000 UTC m=+77.056044241" Jan 13 20:11:13.909572 containerd[1445]: time="2025-01-13T20:11:13.908946435Z" level=info msg="StopContainer for \"5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410\" with timeout 30 (s)" Jan 13 20:11:13.910260 containerd[1445]: time="2025-01-13T20:11:13.909750357Z" level=info msg="Stop container \"5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410\" with signal terminated" Jan 13 20:11:13.920471 systemd[1]: cri-containerd-5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410.scope: Deactivated successfully. Jan 13 20:11:13.943219 systemd[1]: run-containerd-runc-k8s.io-a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e-runc.IpQK9b.mount: Deactivated successfully. Jan 13 20:11:13.958797 containerd[1445]: time="2025-01-13T20:11:13.958755988Z" level=info msg="StopContainer for \"a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e\" with timeout 2 (s)" Jan 13 20:11:13.959208 containerd[1445]: time="2025-01-13T20:11:13.959092869Z" level=info msg="Stop container \"a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e\" with signal terminated" Jan 13 20:11:13.965781 systemd-networkd[1383]: lxc_health: Link DOWN Jan 13 20:11:13.965787 systemd-networkd[1383]: lxc_health: Lost carrier Jan 13 20:11:13.972536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410-rootfs.mount: Deactivated successfully. 
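The StopContainer calls above carry a grace period ("with timeout 30 (s)" for the operator, "with timeout 2 (s)" for the agent) and begin "with signal terminated", i.e. SIGTERM. A generic sketch of that stop pattern, not containerd's actual implementation: deliver SIGTERM, poll for exit, and escalate to SIGKILL once the grace period expires.

```python
import os
import signal
import time

def stop_with_timeout(pid: int, timeout: float = 30.0) -> None:
    """Generic illustration of the graceful-stop semantics logged above."""
    os.kill(pid, signal.SIGTERM)          # polite request first
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)               # probe only; raises once pid is gone
        except ProcessLookupError:
            return                        # exited within the grace period
        time.sleep(0.1)
    os.kill(pid, signal.SIGKILL)          # hard stop after the timeout
```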
Jan 13 20:11:13.980826 containerd[1445]: time="2025-01-13T20:11:13.980768278Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:11:13.983174 containerd[1445]: time="2025-01-13T20:11:13.983124723Z" level=info msg="shim disconnected" id=5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410 namespace=k8s.io Jan 13 20:11:13.983174 containerd[1445]: time="2025-01-13T20:11:13.983173963Z" level=warning msg="cleaning up after shim disconnected" id=5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410 namespace=k8s.io Jan 13 20:11:13.983286 containerd[1445]: time="2025-01-13T20:11:13.983182523Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:13.983744 systemd[1]: cri-containerd-a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e.scope: Deactivated successfully. Jan 13 20:11:13.983982 systemd[1]: cri-containerd-a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e.scope: Consumed 6.676s CPU time. Jan 13 20:11:14.016361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e-rootfs.mount: Deactivated successfully. Jan 13 20:11:14.023471 containerd[1445]: time="2025-01-13T20:11:14.023383670Z" level=info msg="shim disconnected" id=a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e namespace=k8s.io Jan 13 20:11:14.023471 containerd[1445]: time="2025-01-13T20:11:14.023461270Z" level=warning msg="cleaning up after shim disconnected" id=a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e namespace=k8s.io Jan 13 20:11:14.023471 containerd[1445]: time="2025-01-13T20:11:14.023472071Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:14.040771 containerd[1445]: time="2025-01-13T20:11:14.040629561Z" level=info msg="StopContainer for \"a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e\" returns successfully" Jan 13 20:11:14.041407 containerd[1445]: time="2025-01-13T20:11:14.041359484Z" level=info msg="StopContainer for \"5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410\" returns successfully" Jan 13 20:11:14.044628 containerd[1445]: time="2025-01-13T20:11:14.044578053Z" level=info msg="StopPodSandbox for \"2ea650c6e5e5869c70013d72b51b940e42a850b3dd55d9fb4aaeebe30501db25\"" Jan 13 20:11:14.044731 containerd[1445]: time="2025-01-13T20:11:14.044699454Z" level=info msg="StopPodSandbox for \"d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca\"" Jan 13 20:11:14.046968 containerd[1445]: time="2025-01-13T20:11:14.046927140Z" level=info msg="Container to stop \"99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:14.047028 containerd[1445]: time="2025-01-13T20:11:14.046993100Z" level=info msg="Container to stop \"b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:14.047028 containerd[1445]: time="2025-01-13T20:11:14.047011260Z" level=info msg="Container to stop \"99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:14.047028 containerd[1445]: time="2025-01-13T20:11:14.047020660Z" 
level=info msg="Container to stop \"45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:14.047028 containerd[1445]: time="2025-01-13T20:11:14.047028580Z" level=info msg="Container to stop \"a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:14.048637 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca-shm.mount: Deactivated successfully. Jan 13 20:11:14.048995 containerd[1445]: time="2025-01-13T20:11:14.048949786Z" level=info msg="Container to stop \"5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:14.053949 systemd[1]: cri-containerd-d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca.scope: Deactivated successfully. Jan 13 20:11:14.056436 systemd[1]: cri-containerd-2ea650c6e5e5869c70013d72b51b940e42a850b3dd55d9fb4aaeebe30501db25.scope: Deactivated successfully. Jan 13 20:11:14.085607 containerd[1445]: time="2025-01-13T20:11:14.085535735Z" level=info msg="shim disconnected" id=2ea650c6e5e5869c70013d72b51b940e42a850b3dd55d9fb4aaeebe30501db25 namespace=k8s.io Jan 13 20:11:14.085607 containerd[1445]: time="2025-01-13T20:11:14.085607575Z" level=warning msg="cleaning up after shim disconnected" id=2ea650c6e5e5869c70013d72b51b940e42a850b3dd55d9fb4aaeebe30501db25 namespace=k8s.io Jan 13 20:11:14.085833 containerd[1445]: time="2025-01-13T20:11:14.085617855Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:14.085833 containerd[1445]: time="2025-01-13T20:11:14.085547935Z" level=info msg="shim disconnected" id=d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca namespace=k8s.io Jan 13 20:11:14.085833 containerd[1445]: time="2025-01-13T20:11:14.085707895Z" level=warning msg="cleaning up after shim disconnected" id=d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca namespace=k8s.io Jan 13 20:11:14.085833 containerd[1445]: time="2025-01-13T20:11:14.085715495Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:14.097239 containerd[1445]: time="2025-01-13T20:11:14.097199129Z" level=info msg="TearDown network for sandbox \"d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca\" successfully" Jan 13 20:11:14.097239 containerd[1445]: time="2025-01-13T20:11:14.097231410Z" level=info msg="StopPodSandbox for \"d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca\" returns successfully" Jan 13 20:11:14.098716 containerd[1445]: time="2025-01-13T20:11:14.098657414Z" level=info msg="TearDown network for sandbox \"2ea650c6e5e5869c70013d72b51b940e42a850b3dd55d9fb4aaeebe30501db25\" successfully" Jan 13 20:11:14.098716 containerd[1445]: time="2025-01-13T20:11:14.098683654Z" level=info msg="StopPodSandbox for \"2ea650c6e5e5869c70013d72b51b940e42a850b3dd55d9fb4aaeebe30501db25\" returns successfully" Jan 13 20:11:14.181438 kubelet[2612]: I0113 20:11:14.180352 2612 scope.go:117] "RemoveContainer" containerID="5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410" Jan 13 20:11:14.200240 containerd[1445]: time="2025-01-13T20:11:14.199848154Z" level=info msg="RemoveContainer for \"5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410\"" Jan 13 20:11:14.207429 containerd[1445]: time="2025-01-13T20:11:14.207300096Z" level=info 
msg="RemoveContainer for \"5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410\" returns successfully" Jan 13 20:11:14.207613 kubelet[2612]: I0113 20:11:14.207565 2612 scope.go:117] "RemoveContainer" containerID="5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410" Jan 13 20:11:14.211671 containerd[1445]: time="2025-01-13T20:11:14.207827778Z" level=error msg="ContainerStatus for \"5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410\": not found" Jan 13 20:11:14.216870 kubelet[2612]: E0113 20:11:14.216818 2612 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410\": not found" containerID="5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410" Jan 13 20:11:14.219786 kubelet[2612]: I0113 20:11:14.219748 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410"} err="failed to get container status \"5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410\": rpc error: code = NotFound desc = an error occurred when try to find container \"5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410\": not found" Jan 13 20:11:14.219786 kubelet[2612]: I0113 20:11:14.219787 2612 scope.go:117] "RemoveContainer" containerID="a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e" Jan 13 20:11:14.221078 containerd[1445]: time="2025-01-13T20:11:14.221046617Z" level=info msg="RemoveContainer for \"a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e\"" Jan 13 20:11:14.224977 containerd[1445]: time="2025-01-13T20:11:14.224949429Z" level=info msg="RemoveContainer for \"a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e\" returns successfully" Jan 13 20:11:14.225136 kubelet[2612]: I0113 20:11:14.225113 2612 scope.go:117] "RemoveContainer" containerID="45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e" Jan 13 20:11:14.226329 containerd[1445]: time="2025-01-13T20:11:14.226105712Z" level=info msg="RemoveContainer for \"45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e\"" Jan 13 20:11:14.228494 containerd[1445]: time="2025-01-13T20:11:14.228459399Z" level=info msg="RemoveContainer for \"45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e\" returns successfully" Jan 13 20:11:14.228825 kubelet[2612]: I0113 20:11:14.228779 2612 scope.go:117] "RemoveContainer" containerID="99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1" Jan 13 20:11:14.229728 containerd[1445]: time="2025-01-13T20:11:14.229705123Z" level=info msg="RemoveContainer for \"99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1\"" Jan 13 20:11:14.231995 containerd[1445]: time="2025-01-13T20:11:14.231970050Z" level=info msg="RemoveContainer for \"99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1\" returns successfully" Jan 13 20:11:14.232174 kubelet[2612]: I0113 20:11:14.232155 2612 scope.go:117] "RemoveContainer" containerID="99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65" Jan 13 20:11:14.233028 containerd[1445]: time="2025-01-13T20:11:14.232990733Z" level=info msg="RemoveContainer for 
\"99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65\"" Jan 13 20:11:14.235252 containerd[1445]: time="2025-01-13T20:11:14.235122219Z" level=info msg="RemoveContainer for \"99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65\" returns successfully" Jan 13 20:11:14.235357 kubelet[2612]: I0113 20:11:14.235333 2612 scope.go:117] "RemoveContainer" containerID="b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb" Jan 13 20:11:14.236274 containerd[1445]: time="2025-01-13T20:11:14.236240342Z" level=info msg="RemoveContainer for \"b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb\"" Jan 13 20:11:14.238245 containerd[1445]: time="2025-01-13T20:11:14.238215828Z" level=info msg="RemoveContainer for \"b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb\" returns successfully" Jan 13 20:11:14.238417 kubelet[2612]: I0113 20:11:14.238373 2612 scope.go:117] "RemoveContainer" containerID="a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e" Jan 13 20:11:14.238604 containerd[1445]: time="2025-01-13T20:11:14.238570749Z" level=error msg="ContainerStatus for \"a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e\": not found" Jan 13 20:11:14.238729 kubelet[2612]: E0113 20:11:14.238713 2612 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e\": not found" containerID="a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e" Jan 13 20:11:14.238763 kubelet[2612]: I0113 20:11:14.238750 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e"} err="failed to get container status \"a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9fe448632fe0d84e58674b5daeeabc982442b40112218af2b658b8f91b47a7e\": not found" Jan 13 20:11:14.238763 kubelet[2612]: I0113 20:11:14.238760 2612 scope.go:117] "RemoveContainer" containerID="45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e" Jan 13 20:11:14.238925 containerd[1445]: time="2025-01-13T20:11:14.238900910Z" level=error msg="ContainerStatus for \"45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e\": not found" Jan 13 20:11:14.239045 kubelet[2612]: E0113 20:11:14.239024 2612 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e\": not found" containerID="45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e" Jan 13 20:11:14.239078 kubelet[2612]: I0113 20:11:14.239060 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e"} err="failed to get container status \"45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"45db195aecc7c668022d91abbcd15183e7157ae66254a3fed971bb9c4f8e4c1e\": not found" Jan 13 20:11:14.239078 kubelet[2612]: I0113 20:11:14.239071 2612 scope.go:117] "RemoveContainer" containerID="99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1" Jan 13 20:11:14.239266 containerd[1445]: time="2025-01-13T20:11:14.239237511Z" level=error msg="ContainerStatus for \"99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1\": not found" Jan 13 20:11:14.239370 kubelet[2612]: E0113 20:11:14.239353 2612 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1\": not found" containerID="99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1" Jan 13 20:11:14.239401 kubelet[2612]: I0113 20:11:14.239385 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1"} err="failed to get container status \"99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1\": rpc error: code = NotFound desc = an error occurred when try to find container \"99146630f822d15e730386e54ea737d53f8d18aad57031d9255d3e9b5d91e7e1\": not found" Jan 13 20:11:14.239401 kubelet[2612]: I0113 20:11:14.239397 2612 scope.go:117] "RemoveContainer" containerID="99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65" Jan 13 20:11:14.239546 containerd[1445]: time="2025-01-13T20:11:14.239515512Z" level=error msg="ContainerStatus for \"99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65\": not found" Jan 13 20:11:14.239657 kubelet[2612]: E0113 20:11:14.239643 2612 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65\": not found" containerID="99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65" Jan 13 20:11:14.239696 kubelet[2612]: I0113 20:11:14.239682 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65"} err="failed to get container status \"99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65\": rpc error: code = NotFound desc = an error occurred when try to find container \"99507c9e366e0c9c17a3f8a41fb2e1c5a0df8f3e7d3bdde381b3223f1b15dc65\": not found" Jan 13 20:11:14.239696 kubelet[2612]: I0113 20:11:14.239693 2612 scope.go:117] "RemoveContainer" containerID="b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb" Jan 13 20:11:14.239909 containerd[1445]: time="2025-01-13T20:11:14.239831113Z" level=error msg="ContainerStatus for \"b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb\": not found" Jan 13 20:11:14.239980 kubelet[2612]: E0113 20:11:14.239968 2612 
remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb\": not found" containerID="b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb" Jan 13 20:11:14.240010 kubelet[2612]: I0113 20:11:14.239989 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb"} err="failed to get container status \"b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2087431bc150ddfb0d4308371b65fa389a799b4581bcc81ba86eb533a21aecb\": not found" Jan 13 20:11:14.291270 kubelet[2612]: I0113 20:11:14.291229 2612 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-lib-modules\") pod \"20e1414f-f785-4c00-9011-da60587c11f6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " Jan 13 20:11:14.291270 kubelet[2612]: I0113 20:11:14.291274 2612 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-host-proc-sys-kernel\") pod \"20e1414f-f785-4c00-9011-da60587c11f6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " Jan 13 20:11:14.291403 kubelet[2612]: I0113 20:11:14.291304 2612 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct99g\" (UniqueName: \"kubernetes.io/projected/20e1414f-f785-4c00-9011-da60587c11f6-kube-api-access-ct99g\") pod \"20e1414f-f785-4c00-9011-da60587c11f6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " Jan 13 20:11:14.291403 kubelet[2612]: I0113 20:11:14.291322 2612 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-host-proc-sys-net\") pod \"20e1414f-f785-4c00-9011-da60587c11f6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " Jan 13 20:11:14.291403 kubelet[2612]: I0113 20:11:14.291345 2612 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/244a5a16-e215-4461-95cd-0b6d95e31d0e-cilium-config-path\") pod \"244a5a16-e215-4461-95cd-0b6d95e31d0e\" (UID: \"244a5a16-e215-4461-95cd-0b6d95e31d0e\") " Jan 13 20:11:14.291403 kubelet[2612]: I0113 20:11:14.291362 2612 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-bpf-maps\") pod \"20e1414f-f785-4c00-9011-da60587c11f6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " Jan 13 20:11:14.291403 kubelet[2612]: I0113 20:11:14.291404 2612 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-etc-cni-netd\") pod \"20e1414f-f785-4c00-9011-da60587c11f6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " Jan 13 20:11:14.291506 kubelet[2612]: I0113 20:11:14.291426 2612 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20e1414f-f785-4c00-9011-da60587c11f6-clustermesh-secrets\") pod 
\"20e1414f-f785-4c00-9011-da60587c11f6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " Jan 13 20:11:14.291506 kubelet[2612]: I0113 20:11:14.291446 2612 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20e1414f-f785-4c00-9011-da60587c11f6-hubble-tls\") pod \"20e1414f-f785-4c00-9011-da60587c11f6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " Jan 13 20:11:14.291506 kubelet[2612]: I0113 20:11:14.291464 2612 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20e1414f-f785-4c00-9011-da60587c11f6-cilium-config-path\") pod \"20e1414f-f785-4c00-9011-da60587c11f6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " Jan 13 20:11:14.291506 kubelet[2612]: I0113 20:11:14.291486 2612 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmn5m\" (UniqueName: \"kubernetes.io/projected/244a5a16-e215-4461-95cd-0b6d95e31d0e-kube-api-access-vmn5m\") pod \"244a5a16-e215-4461-95cd-0b6d95e31d0e\" (UID: \"244a5a16-e215-4461-95cd-0b6d95e31d0e\") " Jan 13 20:11:14.291506 kubelet[2612]: I0113 20:11:14.291504 2612 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-cilium-cgroup\") pod \"20e1414f-f785-4c00-9011-da60587c11f6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " Jan 13 20:11:14.291653 kubelet[2612]: I0113 20:11:14.291530 2612 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-cni-path\") pod \"20e1414f-f785-4c00-9011-da60587c11f6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " Jan 13 20:11:14.291653 kubelet[2612]: I0113 20:11:14.291551 2612 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-xtables-lock\") pod \"20e1414f-f785-4c00-9011-da60587c11f6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " Jan 13 20:11:14.291653 kubelet[2612]: I0113 20:11:14.291568 2612 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-hostproc\") pod \"20e1414f-f785-4c00-9011-da60587c11f6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " Jan 13 20:11:14.291653 kubelet[2612]: I0113 20:11:14.291585 2612 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-cilium-run\") pod \"20e1414f-f785-4c00-9011-da60587c11f6\" (UID: \"20e1414f-f785-4c00-9011-da60587c11f6\") " Jan 13 20:11:14.295293 kubelet[2612]: I0113 20:11:14.295229 2612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "20e1414f-f785-4c00-9011-da60587c11f6" (UID: "20e1414f-f785-4c00-9011-da60587c11f6"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:14.295293 kubelet[2612]: I0113 20:11:14.295261 2612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "20e1414f-f785-4c00-9011-da60587c11f6" (UID: "20e1414f-f785-4c00-9011-da60587c11f6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:14.295392 kubelet[2612]: I0113 20:11:14.295324 2612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-cni-path" (OuterVolumeSpecName: "cni-path") pod "20e1414f-f785-4c00-9011-da60587c11f6" (UID: "20e1414f-f785-4c00-9011-da60587c11f6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:14.295392 kubelet[2612]: I0113 20:11:14.295345 2612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "20e1414f-f785-4c00-9011-da60587c11f6" (UID: "20e1414f-f785-4c00-9011-da60587c11f6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:14.295392 kubelet[2612]: I0113 20:11:14.295361 2612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-hostproc" (OuterVolumeSpecName: "hostproc") pod "20e1414f-f785-4c00-9011-da60587c11f6" (UID: "20e1414f-f785-4c00-9011-da60587c11f6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:14.295392 kubelet[2612]: I0113 20:11:14.295376 2612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "20e1414f-f785-4c00-9011-da60587c11f6" (UID: "20e1414f-f785-4c00-9011-da60587c11f6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:14.295477 kubelet[2612]: I0113 20:11:14.295392 2612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "20e1414f-f785-4c00-9011-da60587c11f6" (UID: "20e1414f-f785-4c00-9011-da60587c11f6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:14.295477 kubelet[2612]: I0113 20:11:14.295406 2612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "20e1414f-f785-4c00-9011-da60587c11f6" (UID: "20e1414f-f785-4c00-9011-da60587c11f6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:14.296653 kubelet[2612]: I0113 20:11:14.296389 2612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "20e1414f-f785-4c00-9011-da60587c11f6" (UID: "20e1414f-f785-4c00-9011-da60587c11f6"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:14.297912 kubelet[2612]: I0113 20:11:14.297880 2612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/244a5a16-e215-4461-95cd-0b6d95e31d0e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "244a5a16-e215-4461-95cd-0b6d95e31d0e" (UID: "244a5a16-e215-4461-95cd-0b6d95e31d0e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:11:14.297985 kubelet[2612]: I0113 20:11:14.297928 2612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "20e1414f-f785-4c00-9011-da60587c11f6" (UID: "20e1414f-f785-4c00-9011-da60587c11f6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:14.299850 kubelet[2612]: I0113 20:11:14.299711 2612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e1414f-f785-4c00-9011-da60587c11f6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "20e1414f-f785-4c00-9011-da60587c11f6" (UID: "20e1414f-f785-4c00-9011-da60587c11f6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:11:14.299850 kubelet[2612]: I0113 20:11:14.299782 2612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e1414f-f785-4c00-9011-da60587c11f6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "20e1414f-f785-4c00-9011-da60587c11f6" (UID: "20e1414f-f785-4c00-9011-da60587c11f6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:11:14.299850 kubelet[2612]: I0113 20:11:14.299837 2612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20e1414f-f785-4c00-9011-da60587c11f6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "20e1414f-f785-4c00-9011-da60587c11f6" (UID: "20e1414f-f785-4c00-9011-da60587c11f6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:11:14.300107 kubelet[2612]: I0113 20:11:14.300071 2612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e1414f-f785-4c00-9011-da60587c11f6-kube-api-access-ct99g" (OuterVolumeSpecName: "kube-api-access-ct99g") pod "20e1414f-f785-4c00-9011-da60587c11f6" (UID: "20e1414f-f785-4c00-9011-da60587c11f6"). InnerVolumeSpecName "kube-api-access-ct99g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:11:14.301558 kubelet[2612]: I0113 20:11:14.301517 2612 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/244a5a16-e215-4461-95cd-0b6d95e31d0e-kube-api-access-vmn5m" (OuterVolumeSpecName: "kube-api-access-vmn5m") pod "244a5a16-e215-4461-95cd-0b6d95e31d0e" (UID: "244a5a16-e215-4461-95cd-0b6d95e31d0e"). InnerVolumeSpecName "kube-api-access-vmn5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:11:14.392653 kubelet[2612]: I0113 20:11:14.392426 2612 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 13 20:11:14.392653 kubelet[2612]: I0113 20:11:14.392457 2612 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 13 20:11:14.392653 kubelet[2612]: I0113 20:11:14.392468 2612 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20e1414f-f785-4c00-9011-da60587c11f6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 13 20:11:14.392653 kubelet[2612]: I0113 20:11:14.392479 2612 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20e1414f-f785-4c00-9011-da60587c11f6-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 13 20:11:14.392653 kubelet[2612]: I0113 20:11:14.392489 2612 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20e1414f-f785-4c00-9011-da60587c11f6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:11:14.392653 kubelet[2612]: I0113 20:11:14.392499 2612 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vmn5m\" (UniqueName: \"kubernetes.io/projected/244a5a16-e215-4461-95cd-0b6d95e31d0e-kube-api-access-vmn5m\") on node \"localhost\" DevicePath \"\"" Jan 13 20:11:14.392653 kubelet[2612]: I0113 20:11:14.392509 2612 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 13 20:11:14.392653 kubelet[2612]: I0113 20:11:14.392517 2612 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:11:14.392882 kubelet[2612]: I0113 20:11:14.392588 2612 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 13 20:11:14.392882 kubelet[2612]: I0113 20:11:14.392629 2612 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 13 20:11:14.392882 kubelet[2612]: I0113 20:11:14.392641 2612 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 13 20:11:14.392882 kubelet[2612]: I0113 20:11:14.392652 2612 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 13 20:11:14.392882 kubelet[2612]: I0113 20:11:14.392662 2612 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" 
Jan 13 20:11:14.392882 kubelet[2612]: I0113 20:11:14.392671 2612 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ct99g\" (UniqueName: \"kubernetes.io/projected/20e1414f-f785-4c00-9011-da60587c11f6-kube-api-access-ct99g\") on node \"localhost\" DevicePath \"\"" Jan 13 20:11:14.392882 kubelet[2612]: I0113 20:11:14.392680 2612 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20e1414f-f785-4c00-9011-da60587c11f6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 13 20:11:14.392882 kubelet[2612]: I0113 20:11:14.392691 2612 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/244a5a16-e215-4461-95cd-0b6d95e31d0e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 13 20:11:14.468977 systemd[1]: Removed slice kubepods-besteffort-pod244a5a16_e215_4461_95cd_0b6d95e31d0e.slice - libcontainer container kubepods-besteffort-pod244a5a16_e215_4461_95cd_0b6d95e31d0e.slice. Jan 13 20:11:14.486917 systemd[1]: Removed slice kubepods-burstable-pod20e1414f_f785_4c00_9011_da60587c11f6.slice - libcontainer container kubepods-burstable-pod20e1414f_f785_4c00_9011_da60587c11f6.slice. Jan 13 20:11:14.487004 systemd[1]: kubepods-burstable-pod20e1414f_f785_4c00_9011_da60587c11f6.slice: Consumed 6.829s CPU time. Jan 13 20:11:14.936007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ea650c6e5e5869c70013d72b51b940e42a850b3dd55d9fb4aaeebe30501db25-rootfs.mount: Deactivated successfully. Jan 13 20:11:14.936105 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ea650c6e5e5869c70013d72b51b940e42a850b3dd55d9fb4aaeebe30501db25-shm.mount: Deactivated successfully. Jan 13 20:11:14.936166 systemd[1]: var-lib-kubelet-pods-244a5a16\x2de215\x2d4461\x2d95cd\x2d0b6d95e31d0e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvmn5m.mount: Deactivated successfully. Jan 13 20:11:14.936227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d754a49fb540c5605ed1af025d8d8f53417364fdc87688fbd68483013227b4ca-rootfs.mount: Deactivated successfully. Jan 13 20:11:14.936284 systemd[1]: var-lib-kubelet-pods-20e1414f\x2df785\x2d4c00\x2d9011\x2dda60587c11f6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dct99g.mount: Deactivated successfully. Jan 13 20:11:14.936334 systemd[1]: var-lib-kubelet-pods-20e1414f\x2df785\x2d4c00\x2d9011\x2dda60587c11f6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 20:11:14.936387 systemd[1]: var-lib-kubelet-pods-20e1414f\x2df785\x2d4c00\x2d9011\x2dda60587c11f6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:11:14.942903 kubelet[2612]: I0113 20:11:14.942867 2612 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="20e1414f-f785-4c00-9011-da60587c11f6" path="/var/lib/kubelet/pods/20e1414f-f785-4c00-9011-da60587c11f6/volumes" Jan 13 20:11:14.943432 kubelet[2612]: I0113 20:11:14.943412 2612 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="244a5a16-e215-4461-95cd-0b6d95e31d0e" path="/var/lib/kubelet/pods/244a5a16-e215-4461-95cd-0b6d95e31d0e/volumes" Jan 13 20:11:15.869577 sshd[4265]: Connection closed by 10.0.0.1 port 39010 Jan 13 20:11:15.869843 sshd-session[4263]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:15.883273 systemd[1]: sshd@23-10.0.0.49:22-10.0.0.1:39010.service: Deactivated successfully. 
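The repeated "code = NotFound" exchanges above are kubelet's delete-then-verify pattern: after RemoveContainer succeeds, kubelet immediately queries ContainerStatus for the same ID, and the NotFound it gets back, although logged at error level, is the confirmation it wants. A minimal sketch of that status query against the CRI socket, assuming the standard k8s.io/cri-api client and containerd's default socket path:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint (assumption: the default socket path).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Query a container that was just removed; NotFound is the expected outcome.
	_, err = client.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{
		ContainerId: "5feb10501133673770168247e04674df4d20a7c6a92fffbafcfac3b1e5336410",
	})
	if status.Code(err) == codes.NotFound {
		fmt.Println("container already removed, matching the log above")
	}
}

The same exchange repeats for each of the other container IDs torn down above.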
Jan 13 20:11:15.886286 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:11:15.886573 systemd[1]: session-24.scope: Consumed 1.330s CPU time. Jan 13 20:11:15.887954 systemd-logind[1427]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:11:15.892887 systemd[1]: Started sshd@24-10.0.0.49:22-10.0.0.1:42536.service - OpenSSH per-connection server daemon (10.0.0.1:42536). Jan 13 20:11:15.893833 systemd-logind[1427]: Removed session 24. Jan 13 20:11:15.928954 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 42536 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:11:15.930132 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:15.934193 systemd-logind[1427]: New session 25 of user core. Jan 13 20:11:15.943751 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 20:11:16.783526 sshd[4427]: Connection closed by 10.0.0.1 port 42536 Jan 13 20:11:16.785681 sshd-session[4425]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:16.797203 systemd[1]: sshd@24-10.0.0.49:22-10.0.0.1:42536.service: Deactivated successfully. Jan 13 20:11:16.800797 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 20:11:16.805149 systemd-logind[1427]: Session 25 logged out. Waiting for processes to exit. Jan 13 20:11:16.809619 kubelet[2612]: I0113 20:11:16.809449 2612 topology_manager.go:215] "Topology Admit Handler" podUID="3d1a26f2-bf90-44ca-a131-a344181890ed" podNamespace="kube-system" podName="cilium-qxbmv" Jan 13 20:11:16.809619 kubelet[2612]: E0113 20:11:16.809529 2612 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20e1414f-f785-4c00-9011-da60587c11f6" containerName="mount-bpf-fs" Jan 13 20:11:16.809619 kubelet[2612]: E0113 20:11:16.809541 2612 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20e1414f-f785-4c00-9011-da60587c11f6" containerName="cilium-agent" Jan 13 20:11:16.809619 kubelet[2612]: E0113 20:11:16.809549 2612 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="244a5a16-e215-4461-95cd-0b6d95e31d0e" containerName="cilium-operator" Jan 13 20:11:16.809619 kubelet[2612]: E0113 20:11:16.809555 2612 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20e1414f-f785-4c00-9011-da60587c11f6" containerName="mount-cgroup" Jan 13 20:11:16.809619 kubelet[2612]: E0113 20:11:16.809562 2612 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20e1414f-f785-4c00-9011-da60587c11f6" containerName="apply-sysctl-overwrites" Jan 13 20:11:16.809619 kubelet[2612]: E0113 20:11:16.809569 2612 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20e1414f-f785-4c00-9011-da60587c11f6" containerName="clean-cilium-state" Jan 13 20:11:16.814022 kubelet[2612]: I0113 20:11:16.813906 2612 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e1414f-f785-4c00-9011-da60587c11f6" containerName="cilium-agent" Jan 13 20:11:16.814022 kubelet[2612]: I0113 20:11:16.813950 2612 memory_manager.go:354] "RemoveStaleState removing state" podUID="244a5a16-e215-4461-95cd-0b6d95e31d0e" containerName="cilium-operator" Jan 13 20:11:16.814921 systemd[1]: Started sshd@25-10.0.0.49:22-10.0.0.1:42542.service - OpenSSH per-connection server daemon (10.0.0.1:42542). Jan 13 20:11:16.819647 systemd-logind[1427]: Removed session 25. 
Jan 13 20:11:16.827677 systemd[1]: Created slice kubepods-burstable-pod3d1a26f2_bf90_44ca_a131_a344181890ed.slice - libcontainer container kubepods-burstable-pod3d1a26f2_bf90_44ca_a131_a344181890ed.slice. Jan 13 20:11:16.851503 sshd[4438]: Accepted publickey for core from 10.0.0.1 port 42542 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:11:16.852854 sshd-session[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:16.856591 systemd-logind[1427]: New session 26 of user core. Jan 13 20:11:16.866776 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 20:11:16.908511 kubelet[2612]: I0113 20:11:16.908462 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d1a26f2-bf90-44ca-a131-a344181890ed-bpf-maps\") pod \"cilium-qxbmv\" (UID: \"3d1a26f2-bf90-44ca-a131-a344181890ed\") " pod="kube-system/cilium-qxbmv" Jan 13 20:11:16.908638 kubelet[2612]: I0113 20:11:16.908588 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d1a26f2-bf90-44ca-a131-a344181890ed-xtables-lock\") pod \"cilium-qxbmv\" (UID: \"3d1a26f2-bf90-44ca-a131-a344181890ed\") " pod="kube-system/cilium-qxbmv" Jan 13 20:11:16.908697 kubelet[2612]: I0113 20:11:16.908667 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d1a26f2-bf90-44ca-a131-a344181890ed-clustermesh-secrets\") pod \"cilium-qxbmv\" (UID: \"3d1a26f2-bf90-44ca-a131-a344181890ed\") " pod="kube-system/cilium-qxbmv" Jan 13 20:11:16.908758 kubelet[2612]: I0113 20:11:16.908698 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d1a26f2-bf90-44ca-a131-a344181890ed-host-proc-sys-kernel\") pod \"cilium-qxbmv\" (UID: \"3d1a26f2-bf90-44ca-a131-a344181890ed\") " pod="kube-system/cilium-qxbmv" Jan 13 20:11:16.908785 kubelet[2612]: I0113 20:11:16.908764 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d1a26f2-bf90-44ca-a131-a344181890ed-cni-path\") pod \"cilium-qxbmv\" (UID: \"3d1a26f2-bf90-44ca-a131-a344181890ed\") " pod="kube-system/cilium-qxbmv" Jan 13 20:11:16.908806 kubelet[2612]: I0113 20:11:16.908788 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d1a26f2-bf90-44ca-a131-a344181890ed-host-proc-sys-net\") pod \"cilium-qxbmv\" (UID: \"3d1a26f2-bf90-44ca-a131-a344181890ed\") " pod="kube-system/cilium-qxbmv" Jan 13 20:11:16.908871 kubelet[2612]: I0113 20:11:16.908847 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d1a26f2-bf90-44ca-a131-a344181890ed-cilium-run\") pod \"cilium-qxbmv\" (UID: \"3d1a26f2-bf90-44ca-a131-a344181890ed\") " pod="kube-system/cilium-qxbmv" Jan 13 20:11:16.908900 kubelet[2612]: I0113 20:11:16.908875 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jmvn\" (UniqueName: \"kubernetes.io/projected/3d1a26f2-bf90-44ca-a131-a344181890ed-kube-api-access-5jmvn\") pod 
\"cilium-qxbmv\" (UID: \"3d1a26f2-bf90-44ca-a131-a344181890ed\") " pod="kube-system/cilium-qxbmv" Jan 13 20:11:16.908900 kubelet[2612]: I0113 20:11:16.908895 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d1a26f2-bf90-44ca-a131-a344181890ed-lib-modules\") pod \"cilium-qxbmv\" (UID: \"3d1a26f2-bf90-44ca-a131-a344181890ed\") " pod="kube-system/cilium-qxbmv" Jan 13 20:11:16.908987 kubelet[2612]: I0113 20:11:16.908953 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d1a26f2-bf90-44ca-a131-a344181890ed-cilium-cgroup\") pod \"cilium-qxbmv\" (UID: \"3d1a26f2-bf90-44ca-a131-a344181890ed\") " pod="kube-system/cilium-qxbmv" Jan 13 20:11:16.909012 kubelet[2612]: I0113 20:11:16.909005 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d1a26f2-bf90-44ca-a131-a344181890ed-cilium-config-path\") pod \"cilium-qxbmv\" (UID: \"3d1a26f2-bf90-44ca-a131-a344181890ed\") " pod="kube-system/cilium-qxbmv" Jan 13 20:11:16.909050 kubelet[2612]: I0113 20:11:16.909040 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d1a26f2-bf90-44ca-a131-a344181890ed-hostproc\") pod \"cilium-qxbmv\" (UID: \"3d1a26f2-bf90-44ca-a131-a344181890ed\") " pod="kube-system/cilium-qxbmv" Jan 13 20:11:16.909076 kubelet[2612]: I0113 20:11:16.909070 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d1a26f2-bf90-44ca-a131-a344181890ed-etc-cni-netd\") pod \"cilium-qxbmv\" (UID: \"3d1a26f2-bf90-44ca-a131-a344181890ed\") " pod="kube-system/cilium-qxbmv" Jan 13 20:11:16.909101 kubelet[2612]: I0113 20:11:16.909089 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3d1a26f2-bf90-44ca-a131-a344181890ed-cilium-ipsec-secrets\") pod \"cilium-qxbmv\" (UID: \"3d1a26f2-bf90-44ca-a131-a344181890ed\") " pod="kube-system/cilium-qxbmv" Jan 13 20:11:16.909126 kubelet[2612]: I0113 20:11:16.909116 2612 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d1a26f2-bf90-44ca-a131-a344181890ed-hubble-tls\") pod \"cilium-qxbmv\" (UID: \"3d1a26f2-bf90-44ca-a131-a344181890ed\") " pod="kube-system/cilium-qxbmv" Jan 13 20:11:16.917509 sshd[4440]: Connection closed by 10.0.0.1 port 42542 Jan 13 20:11:16.917828 sshd-session[4438]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:16.931589 systemd[1]: sshd@25-10.0.0.49:22-10.0.0.1:42542.service: Deactivated successfully. Jan 13 20:11:16.935850 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 20:11:16.937125 systemd-logind[1427]: Session 26 logged out. Waiting for processes to exit. Jan 13 20:11:16.943944 systemd[1]: Started sshd@26-10.0.0.49:22-10.0.0.1:42552.service - OpenSSH per-connection server daemon (10.0.0.1:42552). Jan 13 20:11:16.945071 systemd-logind[1427]: Removed session 26. 
Jan 13 20:11:16.981080 sshd[4446]: Accepted publickey for core from 10.0.0.1 port 42552 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:11:16.982361 sshd-session[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:16.984356 kubelet[2612]: E0113 20:11:16.984237 2612 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:11:16.986887 systemd-logind[1427]: New session 27 of user core. Jan 13 20:11:16.994767 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 20:11:17.136947 kubelet[2612]: E0113 20:11:17.136482 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:11:17.137206 containerd[1445]: time="2025-01-13T20:11:17.137009648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qxbmv,Uid:3d1a26f2-bf90-44ca-a131-a344181890ed,Namespace:kube-system,Attempt:0,}" Jan 13 20:11:17.164957 containerd[1445]: time="2025-01-13T20:11:17.164846466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:11:17.164957 containerd[1445]: time="2025-01-13T20:11:17.164915266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:11:17.164957 containerd[1445]: time="2025-01-13T20:11:17.164928186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:11:17.165171 containerd[1445]: time="2025-01-13T20:11:17.165011627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:11:17.183781 systemd[1]: Started cri-containerd-d779f02d403d6d68e8c86c18b88e94dce7889e954064def885aecadcc4681955.scope - libcontainer container d779f02d403d6d68e8c86c18b88e94dce7889e954064def885aecadcc4681955. 
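The "NetworkReady=false ... cni plugin not initialized" condition here traces back to the first event in this excerpt: deleting /etc/cni/net.d/05-cilium.conf left containerd with no CNI configuration, so kubelet reports NetworkPluginNotReady until the replacement cilium-agent writes a new one. containerd notices such changes by watching the config directory for filesystem events; the following is an illustrative sketch of that mechanism using the fsnotify package, not containerd's actual code:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Watch the CNI config directory, as containerd does.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for ev := range w.Events {
		if ev.Op&fsnotify.Remove != 0 {
			// A removed conf file triggers a reload attempt; with no configs
			// left, that reload fails exactly like the error at the top of
			// this excerpt.
			log.Printf("cni config removed: %s, reloading", ev.Name)
		}
	}
}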
Jan 13 20:11:17.202046 containerd[1445]: time="2025-01-13T20:11:17.201997370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qxbmv,Uid:3d1a26f2-bf90-44ca-a131-a344181890ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"d779f02d403d6d68e8c86c18b88e94dce7889e954064def885aecadcc4681955\"" Jan 13 20:11:17.203949 kubelet[2612]: E0113 20:11:17.202868 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:11:17.205959 containerd[1445]: time="2025-01-13T20:11:17.205924389Z" level=info msg="CreateContainer within sandbox \"d779f02d403d6d68e8c86c18b88e94dce7889e954064def885aecadcc4681955\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:11:17.223137 containerd[1445]: time="2025-01-13T20:11:17.223088154Z" level=info msg="CreateContainer within sandbox \"d779f02d403d6d68e8c86c18b88e94dce7889e954064def885aecadcc4681955\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"506e7c7345f92fc745269439ff83afba64c20e344199de29b258523990df8d4f\"" Jan 13 20:11:17.223620 containerd[1445]: time="2025-01-13T20:11:17.223505916Z" level=info msg="StartContainer for \"506e7c7345f92fc745269439ff83afba64c20e344199de29b258523990df8d4f\"" Jan 13 20:11:17.245742 systemd[1]: Started cri-containerd-506e7c7345f92fc745269439ff83afba64c20e344199de29b258523990df8d4f.scope - libcontainer container 506e7c7345f92fc745269439ff83afba64c20e344199de29b258523990df8d4f. Jan 13 20:11:17.266976 containerd[1445]: time="2025-01-13T20:11:17.266879931Z" level=info msg="StartContainer for \"506e7c7345f92fc745269439ff83afba64c20e344199de29b258523990df8d4f\" returns successfully" Jan 13 20:11:17.280725 systemd[1]: cri-containerd-506e7c7345f92fc745269439ff83afba64c20e344199de29b258523990df8d4f.scope: Deactivated successfully. 
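Each container above runs in its own transient systemd unit, cri-containerd-<id>.scope, which is why every container exit appears as a scope deactivation. Relatedly, the \x2d sequences in the earlier var-lib-kubelet-pods-... mount-unit names are systemd's path escaping: '/' becomes '-', so a literal '-' in the path must be encoded as \x2d. The go-systemd library exposes the same escaping; a small sketch that reproduces one of those mount-unit names:

package main

import (
	"fmt"

	"github.com/coreos/go-systemd/v22/unit"
)

func main() {
	// systemd derives mount-unit names from paths by dropping the leading
	// slash, turning '/' into '-', and hex-escaping other specials,
	// including '-' itself as \x2d.
	p := "/var/lib/kubelet/pods/20e1414f-f785-4c00-9011-da60587c11f6/volumes"
	fmt.Println(unit.UnitNamePathEscape(p) + ".mount")
}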
Jan 13 20:11:17.305124 containerd[1445]: time="2025-01-13T20:11:17.305046400Z" level=info msg="shim disconnected" id=506e7c7345f92fc745269439ff83afba64c20e344199de29b258523990df8d4f namespace=k8s.io Jan 13 20:11:17.305124 containerd[1445]: time="2025-01-13T20:11:17.305097000Z" level=warning msg="cleaning up after shim disconnected" id=506e7c7345f92fc745269439ff83afba64c20e344199de29b258523990df8d4f namespace=k8s.io Jan 13 20:11:17.305124 containerd[1445]: time="2025-01-13T20:11:17.305106800Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:17.315198 containerd[1445]: time="2025-01-13T20:11:17.315155330Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:11:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:11:18.184866 kubelet[2612]: E0113 20:11:18.184838 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:11:18.186751 containerd[1445]: time="2025-01-13T20:11:18.186717840Z" level=info msg="CreateContainer within sandbox \"d779f02d403d6d68e8c86c18b88e94dce7889e954064def885aecadcc4681955\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:11:18.206091 containerd[1445]: time="2025-01-13T20:11:18.206041948Z" level=info msg="CreateContainer within sandbox \"d779f02d403d6d68e8c86c18b88e94dce7889e954064def885aecadcc4681955\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9091ec1f182f4bd767a4c774b1941dd88a3eb8dfa6a3ba8f206f276c1045b88b\"" Jan 13 20:11:18.206556 containerd[1445]: time="2025-01-13T20:11:18.206515190Z" level=info msg="StartContainer for \"9091ec1f182f4bd767a4c774b1941dd88a3eb8dfa6a3ba8f206f276c1045b88b\"" Jan 13 20:11:18.233739 systemd[1]: Started cri-containerd-9091ec1f182f4bd767a4c774b1941dd88a3eb8dfa6a3ba8f206f276c1045b88b.scope - libcontainer container 9091ec1f182f4bd767a4c774b1941dd88a3eb8dfa6a3ba8f206f276c1045b88b. Jan 13 20:11:18.251877 containerd[1445]: time="2025-01-13T20:11:18.251822683Z" level=info msg="StartContainer for \"9091ec1f182f4bd767a4c774b1941dd88a3eb8dfa6a3ba8f206f276c1045b88b\" returns successfully" Jan 13 20:11:18.258434 systemd[1]: cri-containerd-9091ec1f182f4bd767a4c774b1941dd88a3eb8dfa6a3ba8f206f276c1045b88b.scope: Deactivated successfully. 
Jan 13 20:11:18.287296 containerd[1445]: time="2025-01-13T20:11:18.287210920Z" level=info msg="shim disconnected" id=9091ec1f182f4bd767a4c774b1941dd88a3eb8dfa6a3ba8f206f276c1045b88b namespace=k8s.io Jan 13 20:11:18.287296 containerd[1445]: time="2025-01-13T20:11:18.287284040Z" level=warning msg="cleaning up after shim disconnected" id=9091ec1f182f4bd767a4c774b1941dd88a3eb8dfa6a3ba8f206f276c1045b88b namespace=k8s.io Jan 13 20:11:18.287296 containerd[1445]: time="2025-01-13T20:11:18.287294400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:18.371770 kubelet[2612]: I0113 20:11:18.371670 2612 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:11:18Z","lastTransitionTime":"2025-01-13T20:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 13 20:11:19.014129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9091ec1f182f4bd767a4c774b1941dd88a3eb8dfa6a3ba8f206f276c1045b88b-rootfs.mount: Deactivated successfully. Jan 13 20:11:19.188619 kubelet[2612]: E0113 20:11:19.188409 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:11:19.195038 containerd[1445]: time="2025-01-13T20:11:19.191846393Z" level=info msg="CreateContainer within sandbox \"d779f02d403d6d68e8c86c18b88e94dce7889e954064def885aecadcc4681955\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:11:19.203885 containerd[1445]: time="2025-01-13T20:11:19.203849587Z" level=info msg="CreateContainer within sandbox \"d779f02d403d6d68e8c86c18b88e94dce7889e954064def885aecadcc4681955\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6f7588b6916a78aaa782bf82b7f46bff4c9e6aaeac83042cc1fcc09fc384248e\"" Jan 13 20:11:19.204985 containerd[1445]: time="2025-01-13T20:11:19.204881593Z" level=info msg="StartContainer for \"6f7588b6916a78aaa782bf82b7f46bff4c9e6aaeac83042cc1fcc09fc384248e\"" Jan 13 20:11:19.231827 systemd[1]: Started cri-containerd-6f7588b6916a78aaa782bf82b7f46bff4c9e6aaeac83042cc1fcc09fc384248e.scope - libcontainer container 6f7588b6916a78aaa782bf82b7f46bff4c9e6aaeac83042cc1fcc09fc384248e. Jan 13 20:11:19.254386 containerd[1445]: time="2025-01-13T20:11:19.254342339Z" level=info msg="StartContainer for \"6f7588b6916a78aaa782bf82b7f46bff4c9e6aaeac83042cc1fcc09fc384248e\" returns successfully" Jan 13 20:11:19.255568 systemd[1]: cri-containerd-6f7588b6916a78aaa782bf82b7f46bff4c9e6aaeac83042cc1fcc09fc384248e.scope: Deactivated successfully. Jan 13 20:11:19.276414 containerd[1445]: time="2025-01-13T20:11:19.276297434Z" level=info msg="shim disconnected" id=6f7588b6916a78aaa782bf82b7f46bff4c9e6aaeac83042cc1fcc09fc384248e namespace=k8s.io Jan 13 20:11:19.276414 containerd[1445]: time="2025-01-13T20:11:19.276347554Z" level=warning msg="cleaning up after shim disconnected" id=6f7588b6916a78aaa782bf82b7f46bff4c9e6aaeac83042cc1fcc09fc384248e namespace=k8s.io Jan 13 20:11:19.276414 containerd[1445]: time="2025-01-13T20:11:19.276355394Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:20.014305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f7588b6916a78aaa782bf82b7f46bff4c9e6aaeac83042cc1fcc09fc384248e-rootfs.mount: Deactivated successfully. 
Jan 13 20:11:20.192665 kubelet[2612]: E0113 20:11:20.192630 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:11:20.195551 containerd[1445]: time="2025-01-13T20:11:20.195457298Z" level=info msg="CreateContainer within sandbox \"d779f02d403d6d68e8c86c18b88e94dce7889e954064def885aecadcc4681955\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:11:20.210730 containerd[1445]: time="2025-01-13T20:11:20.210688081Z" level=info msg="CreateContainer within sandbox \"d779f02d403d6d68e8c86c18b88e94dce7889e954064def885aecadcc4681955\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2930e4ed1438ada8e05876ee6a883ad79cd8588d9ae0a902f097c27d749d6d8c\"" Jan 13 20:11:20.212630 containerd[1445]: time="2025-01-13T20:11:20.211275805Z" level=info msg="StartContainer for \"2930e4ed1438ada8e05876ee6a883ad79cd8588d9ae0a902f097c27d749d6d8c\"" Jan 13 20:11:20.240767 systemd[1]: Started cri-containerd-2930e4ed1438ada8e05876ee6a883ad79cd8588d9ae0a902f097c27d749d6d8c.scope - libcontainer container 2930e4ed1438ada8e05876ee6a883ad79cd8588d9ae0a902f097c27d749d6d8c. Jan 13 20:11:20.258741 systemd[1]: cri-containerd-2930e4ed1438ada8e05876ee6a883ad79cd8588d9ae0a902f097c27d749d6d8c.scope: Deactivated successfully. Jan 13 20:11:20.261159 containerd[1445]: time="2025-01-13T20:11:20.261124462Z" level=info msg="StartContainer for \"2930e4ed1438ada8e05876ee6a883ad79cd8588d9ae0a902f097c27d749d6d8c\" returns successfully" Jan 13 20:11:20.280484 containerd[1445]: time="2025-01-13T20:11:20.280350992Z" level=info msg="shim disconnected" id=2930e4ed1438ada8e05876ee6a883ad79cd8588d9ae0a902f097c27d749d6d8c namespace=k8s.io Jan 13 20:11:20.280484 containerd[1445]: time="2025-01-13T20:11:20.280406712Z" level=warning msg="cleaning up after shim disconnected" id=2930e4ed1438ada8e05876ee6a883ad79cd8588d9ae0a902f097c27d749d6d8c namespace=k8s.io Jan 13 20:11:20.280484 containerd[1445]: time="2025-01-13T20:11:20.280416912Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:20.942038 kubelet[2612]: E0113 20:11:20.941844 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:11:21.014367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2930e4ed1438ada8e05876ee6a883ad79cd8588d9ae0a902f097c27d749d6d8c-rootfs.mount: Deactivated successfully. 
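The recurring dns.go:153 events are kubelet capping the applied resolv.conf at three nameservers, the classic glibc limit; the node's own resolv.conf evidently lists more, and only 1.1.1.1, 1.0.0.1 and 8.8.8.8 survive. A minimal sketch of that truncation, assuming the three-entry limit the log message implies rather than kubelet's actual source:

package main

import "fmt"

// maxNameservers mirrors the resolv.conf limit of 3 that the
// "Nameserver limits exceeded" events above refer to.
const maxNameservers = 3

func applyNameservers(ns []string) ([]string, bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true // drop the rest and warn the caller
}

func main() {
	applied, truncated := applyNameservers(
		[]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	fmt.Println(applied, truncated) // [1.1.1.1 1.0.0.1 8.8.8.8] true
}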
Jan 13 20:11:21.195105 kubelet[2612]: E0113 20:11:21.195001 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:11:21.197651 containerd[1445]: time="2025-01-13T20:11:21.197592096Z" level=info msg="CreateContainer within sandbox \"d779f02d403d6d68e8c86c18b88e94dce7889e954064def885aecadcc4681955\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:11:21.210643 containerd[1445]: time="2025-01-13T20:11:21.210605591Z" level=info msg="CreateContainer within sandbox \"d779f02d403d6d68e8c86c18b88e94dce7889e954064def885aecadcc4681955\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"33d9334fa66b26cdc973c5b54cab8d3882791dc5c32fa42e413a84d81112f024\"" Jan 13 20:11:21.211465 containerd[1445]: time="2025-01-13T20:11:21.211421957Z" level=info msg="StartContainer for \"33d9334fa66b26cdc973c5b54cab8d3882791dc5c32fa42e413a84d81112f024\"" Jan 13 20:11:21.242759 systemd[1]: Started cri-containerd-33d9334fa66b26cdc973c5b54cab8d3882791dc5c32fa42e413a84d81112f024.scope - libcontainer container 33d9334fa66b26cdc973c5b54cab8d3882791dc5c32fa42e413a84d81112f024. Jan 13 20:11:21.265248 containerd[1445]: time="2025-01-13T20:11:21.265130430Z" level=info msg="StartContainer for \"33d9334fa66b26cdc973c5b54cab8d3882791dc5c32fa42e413a84d81112f024\" returns successfully" Jan 13 20:11:21.509678 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 13 20:11:22.199723 kubelet[2612]: E0113 20:11:22.199696 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:11:22.213985 kubelet[2612]: I0113 20:11:22.213938 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qxbmv" podStartSLOduration=6.213903326 podStartE2EDuration="6.213903326s" podCreationTimestamp="2025-01-13 20:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:11:22.213873966 +0000 UTC m=+85.368844510" watchObservedRunningTime="2025-01-13 20:11:22.213903326 +0000 UTC m=+85.368873870" Jan 13 20:11:22.944685 kubelet[2612]: E0113 20:11:22.942641 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:11:23.201842 kubelet[2612]: E0113 20:11:23.201744 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:11:24.300959 systemd-networkd[1383]: lxc_health: Link UP Jan 13 20:11:24.305715 systemd-networkd[1383]: lxc_health: Gained carrier Jan 13 20:11:25.138525 kubelet[2612]: E0113 20:11:25.138494 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:11:25.205477 kubelet[2612]: E0113 20:11:25.205423 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:11:25.439925 systemd[1]: 
run-containerd-runc-k8s.io-33d9334fa66b26cdc973c5b54cab8d3882791dc5c32fa42e413a84d81112f024-runc.eT1R0b.mount: Deactivated successfully. Jan 13 20:11:25.937776 systemd-networkd[1383]: lxc_health: Gained IPv6LL Jan 13 20:11:26.207284 kubelet[2612]: E0113 20:11:26.206998 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:11:33.981652 sshd[4448]: Connection closed by 10.0.0.1 port 42552 Jan 13 20:11:33.981731 sshd-session[4446]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:33.985508 systemd[1]: sshd@26-10.0.0.49:22-10.0.0.1:42552.service: Deactivated successfully. Jan 13 20:11:33.989106 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 20:11:33.989809 systemd-logind[1427]: Session 27 logged out. Waiting for processes to exit. Jan 13 20:11:33.990572 systemd-logind[1427]: Removed session 27. Jan 13 20:11:34.941184 kubelet[2612]: E0113 20:11:34.941122 2612 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
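The lxc_health link that came up at 20:11:24 is the veth interface the new cilium-agent creates for endpoint health probing, so its appearance is the visible sign that the CNI is functional again on this node. One way to check for it from Go, a sketch assuming the vishvananda/netlink package (Linux only):

package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

func main() {
	link, err := netlink.LinkByName("lxc_health")
	if err != nil {
		fmt.Println("cilium health interface not present:", err)
		return
	}
	attrs := link.Attrs()
	fmt.Printf("%s: type=%s state=%s\n", attrs.Name, link.Type(), attrs.OperState)
}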